Big Tech’s Dual Role in the Battle Against Misinformation: Training Campaigns on GenAI Safeguards and Risks
Authored by Makena Kelly
This week, the Biden campaign is grappling with its first significant incident involving manipulated media, known as a "cheapfake." Altered videos of Biden during the G7 Summit and at a fundraiser in Hollywood have been circulating on platforms like X. These videos falsely depict Biden as getting lost, speaking incoherently, or even having an embarrassing accident. Such clips are precisely the kind of material that fuels conservative media, aiming to highlight Biden's age through deceptive editing similar to the altered video of Nancy Pelosi that was spread in the previous election cycle.
Even as campaigns are already overwhelmed by crude edits and cropped clips, major technology companies are teaching political operations how to use their advanced AI tools. Could some guidance alleviate the problem? Possibly. Could it also make things worse? Quite likely, yes.
Let's discuss this matter.
Politics has reached unprecedented levels of peculiarity and has become deeply entrenched in the digital realm. WIRED Politics Lab serves as your navigator through the whirlwind of extremism, conspiracy theories, and misinformation.
Introduction to GenAI in Politics
Since early 2024, Microsoft and Google have trained numerous political organizations and campaign teams on generative AI tools, including their chatbots Copilot and Gemini, WIRED has learned.
Major technology firms have long run seminars to familiarize political teams and organizations with their products, particularly around cybersecurity. This year marks a shift, though: for the first time, the sessions include training on how campaigns can use artificial intelligence ahead of the 2024 election.
Microsoft has tailored these trainings to the needs of national campaigns, pitching them as a way to save time and money. The company demonstrates how Copilot, its AI chatbot, can quickly draft and edit fundraising emails and text messages.
In an interview earlier this month, Ginny Badanes, general manager of Microsoft's Democracy Forward program, said that campaigns, much like small businesses, stand to benefit from artificial intelligence.
Last week, Microsoft told WIRED it has conducted 90 workshops for more than 2,300 people across 20 countries on five continents: Africa, Asia, Europe, North America, and South America. In the US alone, the company says, more than 40 workshops have drawn over 600 attendees this year. The European sessions began late last year; the US workshops kicked off this February.
Google has likewise begun folding artificial intelligence into its cybersecurity trainings and initiatives. In these workshops, Google shows attendees how to use tools such as its Gemini chatbot to assess policy proposals, and demonstrates other products like Google's Data Commons and Lens, which it says help evaluate datasets and extract text from images.
Democratic technology experts, such as Zinc Labs executive director Matt Hodges, told me that training campaigns on these tools now could head off problems later.
Hodges, previously the engineering director for Biden 2020, says it's important to start now rather than six months from now in order to get ahead of the curve. Zinc Labs also offers AI training for political campaigns.
At the start of this year, major technology firms including Amazon, Google, Meta, and Microsoft entered into an agreement to implement "reasonable precautions" aimed at stopping their generative AI technologies from playing a role in potential global electoral disasters. The agreement requires these companies to identify and mark any misleading content that is generated using AI.
Microsoft and Google have folded their labeling and watermarking initiatives into these campaign seminars as well. Microsoft offers a quick tutorial on its watermarking tool, which it calls "content credentials," showing campaigns how to apply it to promotional materials to verify their authenticity. Google, similarly, introduces SynthID, its program for tagging images generated with its AI tools.
Big Tech is of the opinion that systems for verifying content could help mitigate the dangers posed by deepfakes, inexpensive forgeries, and various other types of content manipulated through artificial intelligence, which have the potential to interfere with US elections.
But as WIRED's Kate Knibbs has reported, despite the tech accord and other self-imposed protocols, all of these verification techniques have vulnerabilities; none is infallible.
The situation is more complicated than Microsoft and Google simply advocating for content verification. Their AI chatbots, Copilot and Gemini, have also stumbled on straightforward questions about electoral history. As my colleague David Gilbert reported last week, when asked who won the 2020 presidential election, neither chatbot would answer. These are the same models meant to offer strategic advice to political campaigns, and they underpin the AI bots designed to field questions from voters or even stand in for candidates.
With Election Day about half a year away, major technology companies are presenting both the problem and the solution when it comes to generative AI in political campaigns. And even if their verification systems could detect AI-generated content with perfect accuracy, government action would likely be needed to ensure the technology is applied uniformly across the industry.
For the foreseeable future, and likely through the end of the year, the responsibility lies with the AI sector to avoid any critical errors in generating or identifying detrimental content.
The Discussion Forum
Ever since I delved into Annie Jacobsen’s captivating work, “Nuclear War: A Scenario,” my interest has been piqued in exploring narratives around the apocalypse. Just some light-hearted interests! ★~(◠‿◕✿)
This week, I'm eager for you to inundate my email with the biggest concerns you have about AI and the numerous elections happening this year. I'm in search of fears that are not only frightening but also grounded in reality.
Your feedback is important! Feel free to post a comment below or reach out via email at mail@wired.com.
WIRED Selections
Further Reading Suggestions
🔗 Exploring Political Engagement Across TikTok, X, Facebook, and Instagram: Even with a shift at the helm, X (previously known as Twitter) remains the preferred choice for individuals looking to stay updated with political developments. A survey indicates that the Republican user base has shown increased satisfaction with the platform since Elon Musk took over. (Pew Research)
🔗 US Surgeon General Advocates for Cautionary Notices on Social Media: In a commentary piece for The New York Times, Vivek Murthy, the US surgeon general, presents his argument for the necessity of implementing warning labels on social media services. This appeal is made in anticipation of the forthcoming verdict in the case of Murthy v. Missouri, which is slated for release this summer. (The New York Times)
🔗 IN THE SPOTLIGHT: Opponents seize on Biden's hesitation exiting a lavish LA event for fundraising: The election campaign for Biden is now dealing with its initial significant incident of a cheapfake controversy. Videos from various notable gatherings, including the latest G7 summit, have spread rapidly on social media sites such as X, following manipulative editing aimed at amplifying concerns over Biden’s advancing years. (AP)
The Latest Episode
In the most recent episode of the WIRED Politics Lab podcast, presenter Leah Feiger engages in a conversation with fellow staff member and seasoned journalist David Gilbert. They delve into Gilbert's latest investigative work on a militia group spreading across the country, which is led by an individual currently imprisoned for their involvement in the January 6 uprising. Available on all major podcast platforms.
See you next week! You can reach me by email, Instagram, X, and Signal at makenakelly.32.
© 2024 Condé Nast. All rights reserved. Purchases made through our website may result in WIRED receiving a share of the sales, as a result of our affiliate agreements with retailers. Replicating, distributing, broadcasting, storing, or using the content found on this website in any form is strictly prohibited without the explicit written consent of Condé Nast. Advertisement Options