Navigating the Era of Deep Doubt: The Rise of Deepfakes and the Battle for Truth in the Age of AI
Introducing the Age of 'Deep Doubt'
As a flood of highly realistic AI-generated images washes over social platforms such as X and Facebook, we appear to be entering a new phase of media distrust: the age of "deep doubt." Doubting the authenticity of digital content is nothing new; the practice stretches back decades and even predates digital media. But the broad availability of tools that can fabricate convincing content has fueled a rise in people invoking AI manipulation to dismiss genuine documentary evidence. That shift threatens to push the public's already wary attitude toward content shared by unfamiliar sources to unprecedented levels.
Deep doubt is skepticism of authentic media that stems from the mere existence of generative AI. Because the public knows that convincing fakes are now possible, bad actors can more plausibly claim that real events never happened and that genuine documentary evidence was fabricated with AI tools.
The idea of deep doubt isn't new, but its real-world effects are becoming increasingly visible. Since the term "deepfake" emerged in 2017, the technology behind synthetic media has advanced dramatically. That advance has fed recent examples of deep doubt, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-generated hologram, and former president Donald Trump's baseless assertion in August that Vice President Kamala Harris used AI to inflate the apparent size of her crowds. More recently, Trump again alleged AI manipulation, this time of a photograph showing him with E. Jean Carroll, the writer who won a civil case against him for sexual abuse, contradicting his claim that he had never met her.
Years before it became a pressing issue, legal experts Danielle K. Citron and Robert Chesney identified a troubling phenomenon, which they termed "liar's dividend" in 2019. This concept illustrates how deepfakes could be exploited by dishonest individuals to undermine genuine evidence. What was once considered a speculative notion in academia has now transformed into a tangible challenge we face in society.
The Emergence of Deepfakes and the Continuation of Skepticism
Doubt has been wielded as a political tool since antiquity. Its AI-powered version is simply the modern iteration of an old strategy: sow uncertainty, sway the masses, discredit opponents, and obscure the truth. AI has become the latest refuge of liars.
This article was first published on Ars Technica, a reliable platform for news related to technology, analysis of tech policies, reviews, among other topics. Ars Technica is a subsidiary of Condé Nast, which also owns WIRED.
Over the past decade, advances in deep learning have made it far easier to create fake or altered images, audio, text, and video that mimic authentic media. The term "deepfake" comes from a Reddit user called "deepfakes," who posted AI-generated pornography to the platform that swapped the face of the original performer with that of someone who was not in the original video.
In the 20th century, our trust in media created by others arguably rested on the cost, effort, and expertise required to produce documentary photographs and film; even text took significant time and skill to create. As deep doubt sets in, that 20th-century trust may erode. The erosion will change not only how we interpret and interact with media, but also our political discourse, our legal proceedings, and our shared understanding of history, all of which rely on such media for information. With photorealistic images and indistinguishable voice clones now possible, our notion of "truth" in media faces a major recalibration.
In April, a group of federal judges raised concerns about the ability of AI-produced deepfakes to both present false evidence and question the authenticity of real evidence in legal proceedings. These issues were brought up at a session of the US Judicial Conference's Advisory Committee on Evidence Rules, which focused on the difficulties of verifying digital evidence amidst the rise of advanced AI technologies. Although the judges chose to delay any decisions on changes related to AI, their discussion indicates that the topic is already on the radar of judges in the US.
The concept of deep doubt extends beyond immediate happenings and legal matters. In my 2020 article, I explored the notion of a "cultural singularity," a point at which it becomes impossible to tell reality from fiction in media. A critical factor in reaching this point is the amount of "noise," or uncertainty, introduced by AI-generated media into our pool of information on a large scale. Deepfakes might lead to situations where the flood of AI-created content casts widespread skepticism on the legitimacy of historical events—a further example of deep doubt. In 2022, Eric Horvitz, the chief scientific officer at Microsoft, reflected similar concerns in a research paper. He cautioned against a potential future where facts and fiction are indiscernible, describing it as a "post-epistemic world."
Widespread skepticism has the potential to significantly undermine trust across the internet. This decline in confidence is becoming apparent in virtual communities, especially with the emergence of the "dead internet theory." This theory suggests that the majority of internet content is now created by algorithms and bots, which simulate engagement. The advanced capabilities of AI technologies to produce believable fraudulent content are transforming the digital world, impacting millions of internet users and altering numerous online exchanges.
"The Liar's Dividend" and the Crisis of Trust
The term "deep doubt" may be new, but its roots are not. Declining trust in digital content, particularly in the wake of deepfakes, dates to the very emergence of the technology. In a 2018 Guardian article, David Shariatmadari warned of a potential "information apocalypse" driven by deepfakes, asking whether public figures accused of racism or sexism might simply dismiss the evidence as fabricated.
In 2019, Danielle K. Citron from the Boston University School of Law and Robert Chesney from the University of Texas introduced the concept of "liar's dividend" in their study titled “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” According to their research, they argue that deepfakes provide a shield for deceivers, allowing them to dodge responsibility for actions that are indeed factual.
The paradox of the liar's dividend grows stronger as the public becomes more aware of the risks associated with deepfakes, understanding that it's easier to manipulate different types of media. The study cautions that this pattern may lead to an increase in skepticism towards established news outlets, potentially undermining the pillars of democratic conversation. Furthermore, the researchers argue that this situation could pave the way for authoritarian regimes, where objective realities are diminished, and personal views hold more sway than verifiable truths.
Deep doubt compounds existing challenges around misinformation and disinformation, handing another tool to those intent on spreading misleading narratives or discrediting genuine reporting. It could accelerate a trend, already driven by cable news and social media, in which our shared sense of truth grows ever more subjective, with more people embracing ideas that fit their prior beliefs rather than weighing evidence from outside their own cultural vantage point.
Addressing Deep Doubt: The Importance of Context
All meaning comes from context. In a sense, we understand the world by weaving a web of interconnected ideas; a thought considered in complete isolation, disconnected from the broader fabric of reality, tells us little. Likewise, attempting to authenticate a potentially falsified piece of media in isolation makes little sense.
Throughout history, the task of assessing the trustworthiness of sources has been crucial for both historians and journalists. This involves examining the origin, surrounding circumstances, and the intentions behind the source's creation. Take, for instance, a parchment from the 17th century that seems to offer critical insights into a royal court proceeding. To verify its authenticity, scholars would look into its history of ownership and compare its contents with other accounts. Additionally, they would investigate the period's records to confirm the document's existence at that time. This critical approach has remained unchanged, even with the advent of generative AI technology.
Amid increasing worries about content created by artificial intelligence, a number of reliable methods for ensuring the genuineness of digital media have been highlighted. These were discussed by Kyle Orland from Ars Technica in our report on the Harris crowd-size incident.
When assessing the authenticity of digital media, seek multiple corroborating sources: for visual content, footage of the same event from different angles; for text, confirmation from several reliable outlets. Track down the original source of a report or image, from an authentic account or official channel, rather than relying on possibly altered copies circulating on social networks. Comparing multiple firsthand accounts and reputable news reports adds perspective and helps surface any discrepancies among sources.
Typically, it's wise to approach allegations of AI tampering with a critical eye, opting for more straightforward explanations for any peculiar aspects in media before leaping to assumptions about AI participation. This cautious approach helps avoid the trap of confirmation bias, which can lead to compelling but inaccurate conclusions.
Reliable Sources are Key for Identifying Truth
It's worth noting that our suggested strategies for countering deep doubt do not treat watermarks, metadata, or AI-detection tools as silver bullets, because trust is not automatically conferred by the credibility of a digital tool. While the rise of AI and deepfakes has intensified the problem and ushered in the age of deep doubt, seeking out reliable sources of information about events you haven't witnessed firsthand has always been fundamental to understanding history.
Since the launch of Stable Diffusion in 2022, the conversation has frequently turned to the troubling implications of deepfakes: their capacity to undermine trust within communities, erode the integrity of digital content through misinformation, fuel online harassment, and potentially distort the historical record. We have covered generative AI extensively, yet reliably identifying AI-generated content remains an unsolved problem; watermarking schemes are widely seen as inconsistent, and metadata tagging has not gained broad adoption.
AI-detection tools exist, but we recommend not relying on them: they lack scientific validation and can produce both false positives and false negatives. A more dependable approach is to check manually for logical inconsistencies in text or telltale flaws in images, as credible experts recommend.
In the foreseeable future, it is expected that expertly produced digital creations will become so advanced that telling them apart from those made by humans will be impossible. Consequently, this suggests that there won't be an effective automated method to discern whether a piece of media, when viewed on its own, was crafted by a human or a machine (bearing in mind the earlier discussion on the importance of context). This scenario is already a reality for written content, leading to numerous instances where works written by humans are mistakenly identified as the product of artificial intelligence, causing continuous issues, especially for students.
Throughout the ages, every kind of documented media, from old clay tablets onwards, has faced the issue of counterfeits. With the advent of photography, the reliability of photographic evidence came into question: Cameras can deceive. The belief that a device can capture an unbiased reality is mistaken—photographs can be easily altered through selective angles, deceptive contexts, or direct tampering. In the end, our belief in the visual or written content we come across hinges on our confidence in its origin.
The roots of deep doubt stretch deep into the past. Verifying the reliability and credibility of sources remains the primary way to assess the value of information, a practice as relevant today as it was in 3000 BC, at the dawn of written human records.