AI, Deepfakes, and the Election: Navigating the New Frontier of Digital Propaganda



If you buy something through links in our stories, we may earn a commission. This helps support our journalism. Learn more. Please also consider subscribing to WIRED.

Not long ago, there was widespread concern regarding the influence of artificial intelligence on the 2024 electoral process. However, it appears that some of that anxiety has lessened, yet the prevalence of politically motivated deepfakes, including explicit content, remains a significant issue. In today's episode, WIRED journalists Vittoria Elliott and Will Knight discuss the evolving landscape of AI and the current concerns surrounding it.

You can find Leah Feiger on X as @LeahFeiger, Vittoria Elliott as @telliotter, and Will Knight as @willknight. Write to us at politicslab@WIRED.com. And don't forget to sign up for the WIRED Politics Lab newsletter here.

Highlighted this week: OpenAI Evaluates Its Influence Tactics, penned by Will Knight. The Struggle to Identify AI-Generated Fakes Endangers Electorates in the Global South, authored by Vittoria Elliott. The 2024 Election Marks a Turning Point for Generative AI, also by Vittoria Elliott.

How to Tune In

To catch this week's podcast, you can use the audio player available here. However, if you're interested in subscribing at no cost to receive all episodes, follow these steps:

For those using an iPhone or iPad, launch the Podcasts app, or simply click on this link. Alternatively, you can install apps such as Overcast or Pocket Casts, and look for WIRED Politics Lab. We're available on Spotify as well.

Transcript Notice: Please be aware that this transcript was produced automatically and may include inaccuracies.

Leah Feiger: Welcome to WIRED Politics Lab, the podcast that dives into the intersection of technology and politics. I'm your host Leah Feiger, WIRED's lead editor on politics. A couple of months back, there was a surge of concern regarding the impact of artificial intelligence on the upcoming 2024 US presidential election. The sophistication and ease of creating and disseminating AI-generated content, including images, audio clips, and videos, had reached a new peak. Our team here at WIRED, focusing on the role of AI in global electoral processes, has even dubbed the 2024 elections as the epoch of generative AI. Although the initial wave of apprehension regarding AI has diminished, the proliferation of deepfake content involving figures like Kamala Harris, Joe Biden, Donald Trump, and their followers remains rampant. In today's episode, we delve into the complex landscape of legislation concerning political deepfakes and AI-generated adult content. With the election on the horizon, we're here to dissect what's changed, if anything, and the extent of our concerns over AI's influence. Joining our discussion today are WIRED's AI aficionados, including politics correspondent Vittoria Elliott—

Vittoria Elliott: Greetings, Leah.

Leah Feiger: Hi, Tori. Joining us from Cambridge, Massachusetts is senior writer Will Knight. Will, it's great to have you with us for your first time.

Will Knight: Yes, greetings. I appreciate the invitation.

Leah Feiger: To kick things off, if you're comfortable, let's dive into the topic of pornography. Tori, your major piece released today focuses on how various American states are confronting AI-generated adult content. Can you share some insights? How is this being addressed?

Vittoria Elliott explains that the approach to tackling nonconsensual deepfake pornography is fragmented because there is no unified federal law. Representative Alexandria Ocasio-Cortez, herself a victim of such deepfakes, proposed the Defiance Act, which would let victims sue the creators and distributors of this content if they can show it was produced without consent. Senator Ted Cruz, meanwhile, has put forward the Take It Down Act, aimed at compelling online platforms to remove such material. Despite these efforts, there has been little legislative progress in recent months. The issue remains pressing, especially as teenagers increasingly use AI tools to create and distribute explicit images of their classmates, often as a form of harassment. Women are disproportionately targeted by this abuse, even as the same technology finds new applications in political arenas.

Leah Feiger: Could you delve a bit deeper for me? Which states are stepping up here? And when you say things have stalled at the federal level, what exactly does that entail?

Vittoria Elliott: Essentially, what this means is that there are proposals waiting for consideration, but they're not advancing much at the moment. Congress has a lot on its agenda, and with an upcoming election, much of the immediate attention will shift towards campaign efforts. In contrast, state legislatures have a bit more flexibility to act swiftly. Additionally, this issue, which involves safeguarding young individuals and women from online abuse through the use of technology, is one that easily garners bipartisan support.

Leah Feiger: Could you elaborate on what states are doing to protect people in this context, and what those measures look like in practice? Are there common strategies being adopted, or do the approaches vary?

Vittoria Elliott highlights that the approach to legislating against explicit deepfake content varies significantly across different states. For example, Michigan is currently considering legislation that specifically targets explicit deepfakes of minors, allowing victims to take legal action against the creators. In certain jurisdictions, creating such content could also lead to criminal charges, potentially resulting in imprisonment. Elliott notes that there are already numerous regulations concerning the possession of explicit material featuring minors, providing a foundation for lawmakers to expand upon.

Leah Feiger: It's clear that AI has been utilized in the creation of adult content for quite some time. What's prompting certain lawmakers to take action on this matter now?

Vittoria Elliott highlights a particularly pressing issue this year: the concern about AI's role in politics is significant and very much a reality, but a substantial portion of AI-generated material is actually pornographic, disproportionately targeting women, often without their consent. She shares insights from her conversation with Matthew Bierlein, a Republican state legislator in Michigan who became actively involved in combating nonconsensual deepfakes. His interest was initially sparked by deepfake political advertisements, and his first major act upon taking office was to propose legislation addressing them. That early focus on regulating deepfake political ads as a campaign-finance issue eventually broadened to a wider scope of AI-generated abuses, specifically nonconsensual deepfakes.

The case of Taylor Swift, who fell victim to a nonconsensual deepfake that spread rapidly on social media, especially on X, without any means for Swift to remove it, was a pivotal moment for Bierlein and his legislative partner. Despite Swift's considerable influence and wealth, her struggle to protect her image underscored the urgency and significance of advancing legislation in this area. This incident served as a catalyst for Bierlein and his co-sponsor to intensify their legislative efforts, recognizing the critical need to address and prevent such violations.

Leah Feiger: Clearly, there are corporations involved here too. Will, you know these firms and their efforts, or lack thereof, to implement safeguards. Is the prevalence of AI-generated explicit content really as rampant as it seems? It feels bold even to ask, yet it appears to be a pervasive, unending issue.

Will Knight suggests that the widespread availability of open-source or unrestricted software makes it relatively easy for anyone to access and use such tools. The technology behind image generation is publicly accessible, allowing individuals to replicate what larger, more restrictive companies offer. While these companies often have rules against creating celebrity likenesses or explicit content, there are ways around these limitations. Knight points out that the capability to fabricate images has always existed for those with sufficient resources. However, the advent of AI has democratized this ability, making it simple for anyone with internet access to create fake images. Online platforms, particularly Discord servers, abound with communities dedicated to generating a myriad of images, signaling that controlling this technology is now more challenging than ever.

Leah Feiger: It's proliferating rapidly. And setting deepfake pornography aside for a moment, on Elon Musk's X platform… my For You page seems to be overrun with AI-generated images, a significant portion of them shared by Elon Musk directly. Just this week, did everyone catch that picture of Kamala Harris in a communist-style cap, clad in red, accompanied by his comment suggesting this is what a future without Trump would look like? He appears to be doing this without facing any consequences.

Will Knight: Absolutely, it's intriguing to observe this shift. Initially, there was this belief or narrative that deepfakes would deceive the public by depicting individuals in compromising situations. However, what we're witnessing isn't quite that. Instead, deepfakes have morphed into basic yet effective tools for spreading propaganda. While they might deceive some, their primary use appears to be in mocking individuals or creating propaganda en masse. Take the example of "Comrade Kamala." It's particularly interesting as it also sheds light on the inherent biases within AI programs. These programs struggle to create convincing likenesses, as evident in their representation of Kamala Harris.

Leah Feiger: Honestly, I need subtitles to keep up with everything that's going on. But you've hit the nail on the head about the evident satire, which is equally perilous. Tori, back to your recent piece on the laws aimed at combating deepfake pornography: are they making any progress? Have any jurisdictions managed to regulate it effectively?

Vittoria Elliott expresses uncertainty about any single state having a comprehensive solution to the issue. She notes that while 23 states have enacted legislation of some sort, the problem lies in the lack of uniformity among these laws. There are significant differences in legal focus, with some states concentrating on protecting minors and others on safeguarding adult women. This variation creates challenges in cross-state investigations due to differing legal standings on certain actions, further complicated by the internet's lack of geographical boundaries. Elliott points out that while these disparate laws might be somewhat effective at a local level, such as in schools or within the context of domestic abuse, they fall short in addressing broader, more widespread problems, making enforcement difficult.

Leah Feiger: Will, in your opinion, are major AI firms like OpenAI collaborating with governmental bodies to establish guidelines? I'm curious about the kind of inquiries we should be making to inform this legislation, beyond implementing outright prohibitions.

Will Knight shared his insights, noting that technology experts are indeed collaborating with political figures to a certain extent, offering guidance. They've agreed to implement certain technologies aimed at marking images to verify their authenticity. However, as Tori has explored in her writing, it's an ongoing challenge. The technology continues to advance, making it increasingly difficult to detect deepfakes as perpetrators find new ways to bypass these detection methods. Knight discussed his conversation with Hany Farid, a renowned authority on identifying deepfakes, whom Tori is familiar with. Farid, who has launched a new venture, believes that combating deepfakes will become as commonplace as using anti-malware or spam filters. He predicts a future where a wide range of entities, from corporations to individuals, will need to employ technology specifically designed to identify these deceptive creations. Farid also envisions a shift towards more personalized protective measures, extending beyond the political realm. He points to instances of revenge porn as an early indicator of this trend, alongside successful financial frauds where a brief appearance by a manipulated image of a CEO could lead to significant financial loss. This suggests the potential for a broader impact, affecting a wider audience.

Leah Feiger suggests that pornography is merely the beginning of a broader issue, alongside concerns like Comrade Harris. Given the range of issues at play, she argues it would be logical for the federal government in the United States to act. People who have been unwillingly featured in AI-created adult content, such as AOC, have pushed Congress to address and regulate the matter, as Tori mentioned, yet progress has stalled.

Vittoria Elliott expressed her belief that it's not a matter of politicians being indifferent to certain issues, but rather these issues being buried under a pile of other concerns. The complexity of addressing some topics, such as the harm caused by deepfakes, particularly those targeting adult women, adds to the challenge. Elliott pointed out, through her conversation with a lawyer, the difficulty in proving malicious intent behind creating deepfakes, as perpetrators rarely leave direct evidence of their intentions. This aspect makes legal action challenging. Furthermore, Elliott highlighted discussions with Kaylee Williams, a doctoral candidate at Columbia University studying nonconsensual deepfakes. Williams noted the perception difference when it comes to deepfakes of celebrities or public figures like Taylor Swift or AOC. Creators of such content often view their work as homage or fan art, not recognizing the abusive nature of their actions, complicating the process of proving harmful intent. Elliott underlined the challenge is not solely about public concern but also about the feasibility of implementing and enforcing laws against such acts. She also mentioned that while there are extensive laws targeting cybercrime and child exploitation, extending these protections to include adult women faces additional hurdles.

Leah Feiger: Absolutely. They clearly have a long road ahead. We'll take a quick break, and when we return, we'll dig deeper into AI's influence on the 2024 election.


Leah Feiger: Returning to WIRED Politics Lab, Tori and Will, you're constantly immersed in AI coverage. It appears the alarm regarding AI's impact on our elections has calmed down. A recent article in The New York Times was headlined “The Year of the AI Election That Wasn't.” Would you concur with this perspective? Has the apprehension surrounding deepfakes diminished, or are there still conversations happening with individuals who are seriously worried about the potential developments in the upcoming months?

Will Knight believes that although it hasn't been a major problem yet, the worry remains because there's a possibility that a very persuasive deepfake could emerge close to election time, potentially causing significant influence.

Leah Feiger: Correct.

Will Knight highlighted an intriguing aspect, noting the lack of truly persuasive deepfakes so far. The circulation of manipulated images, such as those tagged "Comrade Harris," exemplifies a widespread and troubling effort to undermine factual accuracy, a trend that has garnered significant participation. This phenomenon was particularly evident in discussions led by Trump regarding artificially enhanced crowd sizes, a claim that may not have significantly impacted the broader public but seemed to resonate with his followers. The prevailing notion that reality can be disputed and that truth is negotiable has been developing over time. Knight suggests that this approach to distorting reality could potentially have significant implications.

Leah Feiger: It's happening daily. We've touched on Comrade Harris and the exaggerations about crowd sizes, but it doesn't stop there. Recall a few weeks back when Trump disseminated AI-generated images of supposed supporters, calling them Swifties for Trump: visuals of masses of young female fans in Swifties for Trump attire, which was quite startling. And yes, Will, I concur. X has clearly become a platform that's no longer reliable for factual information or news; everything on it is intensely amplified.

Vittoria Elliott emphasizes the notion that while it may not deceive individuals, it serves as potent propaganda. She points out that even though she's aware that Lord of the Rings isn't real, she nevertheless finds herself moved to tears whenever Sam carries Frodo up the mountain.

Leah Feiger: It's wonderful that even while doing this podcast, we continue to discover new things about each other.

Vittoria Elliott highlighted that while the fictional portrayal of "Comrade Kamala" is easily identified as untrue, it nonetheless taps into deep-seated perceptions some individuals hold about her. Elliott emphasized the broader conversation regarding artificial intelligence (AI) in electoral processes, pointing out that many immediately think of deepfakes. However, the use of AI extends far beyond just deepfakes. Through discussions with various experts for the AI Global Elections Project, it was revealed that AI applications range from drafting speeches using ChatGPT to automating engagement efforts, such as making phone calls to voters in India. These AI applications are not inherently designed to mislead, yet their integration into campaign strategies is undeniable. Elliott speculated that as the year draws to a close, it's likely more campaigns will openly acknowledge their extensive use of AI—not in ways that overtly aim to deceive, but rather through more subtle methods like precise voter targeting, automated interactions, and chatbots. These techniques may not be as conspicuous because they operate behind the scenes, avoiding the negative connotations often associated with AI. Despite this, Elliott believes the impact of AI in elections is significant and perhaps underappreciated in discussions that focus too narrowly on certain aspects.

Leah Feiger notes, citing reporting in The New York Times, that AI firms have struggled to market their technology effectively to political campaigns. Companies attempted to deploy AI-driven calling systems to connect with voters, a tactic WIRED followed in its AI Global Elections Project tracker. While that approach found some success with Indian voters, it failed to resonate with Americans, who often hung up upon realizing an AI bot representing a political figure or campaign was on the line. So, Will, how are these companies adapting to such challenges? As Tori mentioned, there's been an effort to fold these technologies seamlessly into the workplace, positioning AI bots alongside conventional tools like Microsoft Word and Excel, but that approach hasn't quite hit the mark either. What strategies are companies employing to navigate these obstacles?

Will Knight highlights the nascent stage of the widespread adoption of language models, including those with audio and video capabilities. Despite challenges in market acceptance, there's significant effort behind the scenes to refine these technologies and enhance their persuasive power. A key factor in ChatGPT's triumph lies in its ability to convincingly interact with users, delivering responses that may not always be factual but are tailored to what users expect to hear. OpenAI's introduction of a voice interface, offering emotional and social nuances akin to human conversation, like what we experience in podcasts, aims to enrich user engagement. This technology, even if initially met with skepticism, taps into the human desire for connection, as evidenced by the popularity of AI companions for their emotional resonance.

Looking forward, the potential for businesses to leverage these advanced tools for more effective persuasion is substantial. Existing studies have already shown that conversing with a language model can alter a person's views, indicating the possibility of these models becoming even more influential over time. This could revolutionize advertising and has profound implications for political discourse, where chatbots could sway public opinion by promoting specific narratives or ideologies. Such a development suggests the onset of a competitive landscape where the ability to influence through AI becomes a pivotal battleground.

Leah Feiger: The risk is palpable. Reflecting on your report from a few months back, Will, there's this particular story about the impact of AI on influencing individuals that continues to linger in my thoughts. It's the discussion around Sam Altman from OpenAI and his claims regarding the technology's potential to modify human behavior. The prospect of such influence being misused becomes increasingly conceivable as AI advances in capability and as our dependence on it potentially grows. Do we face the possibility of AI being employed to alter electoral outcomes by shifting people's opinions in the future?

Will Knight suggests that without significant efforts to limit developments, it's probable we're heading in a direction where advanced AI assistants, equipped not only with intelligence but also perceived empathy—akin to a skilled salesperson—could potentially persuade individuals into a wide array of actions.

Leah Feiger: Could you explain the safety measures implemented to prevent this? This aspect of our discussion seems to be the most alarming so far.

Will Knight: They are beginning to investigate the possibilities, with some guardrails in place to prevent overtly political applications of large language models. They want to observe how these systems affect human behavior, yet that exploration is happening in an uncontrolled environment, which is quite astonishing.

Vittoria Elliott expressed her belief that it's premature to dismiss this technology as useless or lacking in influence. She recalled the concerns her parents had when she first created a MySpace account, cautioning her about the internet being a haven for predators and misinformation, and advising her not to trust everything she read online. Elliott pointed out that judging the future impact of social media on our perception of information based on its early days would have led us to underestimate its role. Initially, it seemed like social media was merely a platform for exchanging music and listing friends rather than a space where political beliefs could be shaped and shared.

Leah Feiger: I truly long for those times, absolutely.

Vittoria Elliott remarked, similarly, that within a decade we found ourselves grappling with the reality that this platform had become a crucial arena for political conversations, capable of influencing election outcomes. She emphasized that we are currently at a juncture where the impact of AI on elections remains uncertain. Despite possibly underestimating AI-generated content as easily identifiable and dismissible, Elliott warns against complacency. She suggests that the rapid evolution of technology might lead us to reflect on this period with astonishment, questioning our previous disbelief in its potential to affect significant outcomes.

Leah Feiger: Absolutely, let's delve into the topic of identifying deepfakes. Given that deepfakes have evolved to a point where they can genuinely deceive individuals, it's clear that numerous firms have emerged, boasting their ability to recognize these deepfakes. Will, how effective are these solutions?

Will Knight: Absolutely, there are numerous methods to identify deepfakes, ranging from examining the actual file to scrutinizing the visual or auditory elements. Clearly, the solution leans toward more AI, yet the effectiveness of these detection tools is somewhat lacking. Testing various examples shows that many of them fail to reliably identify every instance, leading to a continuous arms race.

Leah Feiger: You know, Tori, in your recent report, you touched on the dismal state of deepfake detection beyond the US and Europe. Can you shed light on the difficulties encountered there? It appears to be a widespread issue, but what makes it even worse in regions outside the US and Europe?

Vittoria Elliott points out a significant hurdle: the bulk of the training data for AI, both for generative models and for detection tools, is predominantly sourced from white, English-speaking, Western populations. That bias makes it harder to accurately create, or identify, deepfakes of individuals like Kamala Harris, because diverse representations are missing from the training sets. The issue extends beyond appearances to non-English-speaking communities, where detectors often flag genuine text as AI-generated because of unfamiliar syntax patterns. Lower-quality media produced by inexpensive devices, common in many regions, further complicates the models' ability to discern real from fake. And such inaccuracies aren't confined to the Global South; they're widespread. For example, Sam Gregory of Witness, whose service helps civil society and journalists detect deepfakes, found that adding background noise to a fake audio clip of Joe Biden could fool detection tools into deeming it authentic. This underscores the current limitations and inconsistencies of AI detection models.

Leah Feiger: With the election just over two months away, I've noticed an emergence of businesses claiming they can identify whether something is or isn't generated by AI. The landscape has shifted significantly in recent months. Earlier in the year, when experts released their podcasts and articles forecasting the impact of AI on the election, it was impossible to foresee the current scenario where Biden isn't a contender, and Trump is facing off against Harris instead. This presents a stark contrast from what was anticipated. What are your thoughts on the developments we might expect moving forward? What should we be vigilant about?

Will Knight posits that it's challenging to imagine any evidence that could truly be detrimental to Donald Trump currently. However, should any incriminating audio surface, he suggests Trump would likely assert it was fabricated using artificial intelligence.

Leah Feiger: Agreed.

Will Knight notes that AI tools designed to identify deepfakes aren't always reliable, and their conclusions can be ambiguous. He speculates that this ambiguity could itself be exploited in scenarios involving potentially incriminating evidence, particularly if another damaging recording were to surface.

Leah Feiger observed how peculiar it is that there seems to be a lack of understanding, despite the widespread awareness of AI technology and deepfakes. This awareness alone significantly influences conversations around the topic. For instance, Trump might assert that an image, possibly showing the number of attendees at a Kamala Harris event, is the result of AI manipulation. Or, as mentioned earlier, an especially damning piece of evidence might surface, with accusations of technological tampering, even when no such tech has been used.

Vittoria Elliott: Essentially, what we're dealing with here is what specialists call the liar's dividend: when everything could be fabricated, nothing seems authentic. Reflecting on 2016, particularly the Access Hollywood tape, it strikes me that if a similar event were to unfold today, it could be easily dismissed with a quick tweet or a Truth Social post claiming, “This is the work of AI; it's not actually me.” That provides a convenient escape, and it appears we'll continue to witness the strategic use of this technology to sow doubt and erode our collective sense of what's true.

Leah Feiger believes it's crucial to focus on the detailed aspects of how AI is being utilized, especially for propaganda purposes. While people might be aware that certain images or movements, like Swifties for Trump, are fabricated, the potential for these creations to impact opinions and spread misinformation is significant. Feiger is particularly worried about the upcoming two months, suggesting that there are high stakes involved. Questions such as whether a ballot box is being transported out of Nevada by someone with Michigan plates are just the tip of the iceberg. This period is especially sensitive due to ongoing skepticism from individuals who still question the legitimacy of the 2020 and 2022 elections. These groups are especially susceptible to believing and spreading false claims, making AI an incredibly effective tool for misinformation. Feiger questions our readiness to handle such challenges.

Vittoria Elliott: No.

Will Knight: Certainly not.

Vittoria Elliott: No.

Leah Feiger: Absolutely not. A unanimous no from everyone in the room.

Will Knight pointed out the critical importance of establishing and agreeing upon a common truth, especially now when it's being challenged more than ever. Echoing Tori's thoughts, he mentioned an intriguing point highlighted in a book from a few years back titled "The Death of Truth," which examined how questioning the very notion of truth became a method of manipulation during Trump's first term. Knight emphasized the danger in adopting a belief that truth is relative, a perspective that has unfortunately found its foothold in some political circles.

Leah Feiger: I'm eagerly anticipating our future discussion, whether it happens in a few weeks or months, about the concept of truth's variability in all these matters. What instances we'll highlight remains uncertain. I deeply appreciate both of you for being here today. We'll pause for a brief intermission, and upon our return, we'll dive into the Conspiracy of the Week.


Leah Feiger: Greetings once again to WIRED Politics Lab. We've reached the moment for Conspiracy of the Week, a segment where our invited participants share the most intriguing conspiracy theories they've stumbled upon lately or those they hold dear. It's my role to choose the top pick. I'm thrilled. Tori, you've been eager to claim victory for quite some time. What's on your agenda for us today?

Vittoria Elliott: Essentially, what I'm presenting to you are two choices, yet both are centered on the person I'm truly dating, RFK Jr. I've set up Google Alerts for him. I also follow his Telegram channel, which I regularly monitor. We share a strong connection.

Leah Feiger: It genuinely fills me with joy to know that this has contributed to your experience of the election coverage. I'm pleased to see you've developed such a connection with a candidate who is no longer in the presidential race.

Vittoria Elliott: Indeed, a representative for the campaign.

Leah Feiger: Naturally, acting as a campaign representative. Alright, let's hear it. What do we have?

Vittoria Elliott: Alright. First off, it's clear that whenever there's some unfavorable news concerning RFK, some bizarre incident involving animals seems to emerge. Initially, it was the discovery of a deceased bear in Central Park coinciding with the publication of an article in The New Yorker. Following his decision to end his political campaign and endorse Donald Trump for president, there was an odd tale involving a whale's head that he—

Leah Feiger: Also, remember the incident with the dogs earlier. Will, wouldn’t you prefer being on the politics team and constantly involved in this?

Will Knight: Definitely.

Leah Feiger: Actually, it pertains to the wildlife section.

Vittoria Elliott: However, his TikTok posts where he feeds the neighborhood ravens have drawn some criticism. Personally, I find that aspect quite fascinating. And here's a little-known fact for you: a group of ravens is called a conspiracy, which makes it my top pick for a conspiracy.

Leah Feiger: That's terrible. That's really awful, Tori.

Vittoria Elliott: You're welcome. I've got a fresh Conspiracy of the Week for you, and it ties back to RFK, which I thought you'd enjoy. I'm all for a conspiracy of ravens myself. Moving on: back in April, at an event in New York, RFK Jr. claimed that the CIA has infiltrated American journalism as part of a broader scheme to take control. He pointed out that many people at the helm of major media outlets have ties to the CIA, and specifically accused the newly appointed head of NPR of being a CIA operative. The notion that we're not just overworked, underpaid journalists but secret agents is quite amusing to me. And, just hypothetically, if there happens to be a secret cache of government funds, I've got a few ideas about my next assignment we could talk about later.

Leah Feiger: Okay, that's an excellent point. Thanks, Tori. Will, what's your take on this?

Will Knight: It's quite the challenge to match up to RFK, but in true CIA agent fashion, I'll delve into the more bizarre aspects of AI, specifically its intersection with philosophy. Let's talk about something known as Roko's basilisk. Originating from mythology, the basilisk is a serpent-like creature with the deadly power to kill anyone who locks eyes with it. This concept inspired a theoretical scenario proposed on an AI discussion platform. It suggests that a superintelligent entity in the future might create a simulated reality where we all exist, and it could use this simulation to punish those who either acted against its creation or even entertained the thought of doing so. This idea was explored in a discussion where…

Leah Feiger: Astonishing.

Will Knight: On certain discussion platforms, any discussion of Roko's thought experiment, known as Roko's basilisk, was prohibited. The rationale was that merely considering the concept might pose a risk, which I find especially absurd.

Leah Feiger: That's hilarious. On what forums is this spreading, or not spreading?

Will Knight: It was LessWrong, a well-known online forum focused on discussing the dangers and alignment issues of artificial intelligence.

Leah Feiger: How frequently do you find yourself contemplating Roko's basilisk?

Will Knight: Honestly, I stumbled upon it not too long ago, and I make an effort to avoid dwelling on it, just to be safe. It reminds me of Pascal's wager, right? It's essentially a gamble on the emergence of superintelligence, which means you're somewhat compelled to work towards its realization. Yes, it's utterly insane.

Leah Feiger: Ah, that's an excellent choice. Alright. Oh, this one's a bit challenging this week, but I'm going with Tori. The CIA conspiracy takes it.

Vittoria Elliott: At last. Did the ravens put me over the top? I'm eager to find out.

Leah Feiger: The work you put into the ravens really impressed me. I appreciated it because it showed your dedication, earning you top marks for both effort and accomplishment. Well done.

Vittoria Elliott: I appreciate it.

Leah Feiger: Also, I find it hard to declare it a victory for something that's off-limits to my thoughts from now on. Tori and Will, I really appreciate you both being here. You were fantastic participants.

Vittoria Elliott: Appreciate it, Leah.

Will Knight: I appreciate the opportunity to be here.

Leah Feiger: Thank you for tuning into WIRED Politics Lab. If you enjoyed our episode today, don’t forget to subscribe and leave a rating on your preferred podcast platform. Additionally, we offer a weekly newsletter penned by Makena Kelly. You can find the subscription link for the newsletter along with the WIRED articles we discussed today in the episode notes. Should you have any inquiries, feedback, or ideas for future episodes, we warmly invite you to reach out to us at politicslab@WIRED.com. Again, that's politicslab@WIRED.com. Your input is greatly appreciated. The production team of WIRED Politics Lab includes Jake Harper as producer, with Pran Bandi handling studio engineering. Amar Lal was responsible for mixing this episode. Our executive producer is Stephanie Kariuki, and Chris Bannon oversees global audio operations at Condé Nast. I, Leah Feiger, have been your host. Join us again next week for another episode in your feed.
