The Ticking Clock: The Urgent Need to Regulate Deepfake Political Ads Before the US Election
The Reluctant Battle Against Deepfake Advertising
Two things happened this week that deepened my worries about artificial intelligence's impact on the US election.
First, WIRED published a deep dive reporting that voters in India were targeted with more than 50 million deepfake voice calls impersonating candidates and political figures. The sheer volume of these deepfakes has sown widespread confusion, with many voters mistaking the calls for authentic communications.
Then the Federal Communications Commission announced plans to explore rules for AI-generated advertisements, not long after banning AI-driven robocalls. Which raises the question: Why is the FCC the only US agency adopting new rules on AI and election integrity this year? India's recent elections are a cautionary tale, and a sign that the US needs to speed up its regulatory efforts. Yet the FCC appears to be the only agency taking the initiative.
Let's discuss this.
The political landscape has never been more peculiar or digital. The WIRED Politics Lab serves as your navigator through the whirlwind of radical views, conspiracy theories, and false information.
The United States Faces a Tight Deadline to Eliminate AI-Generated Misleading Political Advertisements
Remember when the Republican National Committee released an AI-generated ad attacking Biden? Or when a super PAC backing Florida governor Ron DeSantis ran an AI ad impersonating former president Donald Trump? Nearly a year has passed since those ads debuted, and despite the initial public outcry, no new laws regulating AI-generated political ads have been passed.
Last year, Senate majority leader Chuck Schumer convened a series of meetings with stakeholders and AI industry leaders to tackle the problems posed by generative AI. Protecting American elections from disruption by the technology, especially ahead of November, was a top priority for him. He has since released a report and urged senators to enact its recommendations, but little progress has been made beyond that.
The Federal Communications Commission may not wield the same power as Congress, but of the two, it has taken the more significant action. In February, after a robocall mimicking President Joe Biden went out to voters in New Hampshire, the commission banned the use of generative AI in robocalls. On Wednesday, FCC chair Jessica Rosenworcel introduced a new proposal that would require political ads on broadcast TV, radio, and certain cable platforms to clearly disclose whenever AI-generated or manipulated content is used.
"In light of the increasing availability of artificial intelligence technology, the Commission aims to ensure that consumers are adequately informed about its use," stated Rosenworcel. She added, "I have presented a proposal to my peers today, emphasizing that consumers deserve to be aware of the use of AI tools in political advertising they encounter. It is my expectation that they will quickly address this matter."
This is good news, but voters are likely to encounter far more digital fakery online than over the airwaves, and regulators have yet to propose any remedies for online advertising.
The advocacy group Public Citizen has formally petitioned the Federal Election Commission (FEC) to require clear disclosures on political ads across all platforms, similar to what the FCC proposes, but the FEC has yet to act. The Washington Post reported in January that the agency is expected to reach some kind of decision by the start of summer; with summer fast approaching, there has been little to no update. Earlier this month, the Senate Rules Committee advanced three bills governing the use of AI in elections, including a disclosure requirement. But there is no guarantee these bills will reach the Senate floor in time to make a meaningful difference.
If you want a genuine fright, consider that the presidential election is a mere 166 days away. The window for progress on AI transparency is closing fast, especially as the Biden and Trump campaigns, along with countless other candidates, prepare to ramp up their spending on social media ads.
In the absence of formal rules, the burden of protecting our elections from disinformation falls largely on tech companies. If that sounds a lot like 2020, I agree! The problem may seem novel, but the same old players are at the forefront. In November, Meta announced that political ads must carry warning labels if they use AI-generated material. TikTok bans political ads altogether, but it requires users to label AI-generated content containing lifelike images, audio, or video when they post it.
And if something goes badly wrong? Sure, Mark Zuckerberg and other tech executives might get hauled before Congress a few times, but it's doubtful they'll face any regulatory consequences before the election.
The stakes are high, and time is running out. If Congress or any regulator is going to provide guidance, it needs to act in the coming months. Otherwise, any effort may come too late to matter.
The Discussion Space
In the concluding segment of our podcast episode this week, we invited our audience to share their insights on how their engagement with political content on the internet has evolved since the previous presidential race. Are you heading straight to journalistic websites for the latest on the election? Do you still maintain a positive connection with X/Twitter? Or perhaps you're a follower of newsletters such as ours? I'm eager to hear your stories!
Feel free to post your thoughts on the website or reach out to me directly via email at mail@wired.com.
📝 Share your thoughts in the comment section underneath this post.
WIRED Selections
Additional Reading Recommendations
🔗 Exploring the Simplicity of Programming A.I. Chatbots for Disinformation Purposes: The New York Times developed a pair of chatbots, designed with opposing political viewpoints. These bots were capable of providing biased answers to political inquiries, mimicking the tone and manner in which individuals often communicate on the internet. (The New York Times)
🔗 Positive Update on Biden and Young Voters: Although Biden's current approval ratings among young voters have dipped compared to his 2020 performance, the outlook might not be as bleak as it appears. (The Atlantic)
🔗 OpenAI Lays It All Out: Scarlett Johansson's heated response to OpenAI's latest voice model highlights the organization's relentless data consumption. (The Atlantic)
The Scoop
Allow me a moment to brag about my workplace. This week, our WIRED Politics Lab podcast cracked the top 20 of Apple Podcasts' news category, and Amazon Music named us among the top podcasts of the week!
This week, I've rejoined Leah and David on the podcast to discuss the definitive closure of Twitter (now called X, which is quite frustrating), where digital political messaging is headed, and how this connects with the New York–Dublin Portal. Give it a listen!
Lastly, creating successful posts often involves realizing when you're not up to date on certain topics.
That wraps it up for today—appreciation for your subscription. Feel free to reach out to me through email, Instagram, X, and Signal at makenakelly.32.
Discover More With Us…
Explore the election period through our WIRED Politics Lab newsletter and podcast.
© 2024 Condé Nast. All rights reserved. WIRED may receive a share of the revenue from items bought via our website, as a result of our affiliate agreements with retail partners. Content from this website is not to be duplicated, shared, broadcast, stored, or used in any form without the explicit consent of Condé Nast.