Google Intensifies Battle Against Explicit Deepfakes, Slashing Access by 70%
Until recently, a Google search for “deepfake nudes jennifer aniston” surfaced several top results claiming to host explicit, AI-generated images of the actor. Those results have now disappeared.
Emma Higham, a product manager at Google, says that updates to the search engine's ranking systems rolled out this year have cut exposure to fake explicit images by more than 70 percent on searches for specific people. Where problematic results once appeared, Google's algorithms now surface news stories and other non-explicit content. A search for Aniston now returns results such as articles on “How Taylor Swift's Deepfake AI Porn Represents a Threat” and a warning from the Ohio attorney general about “deepfake celebrity-endorsement scams” targeting consumers.
In a blog post on Wednesday, Higham wrote that, thanks to these adjustments, people can now learn about the impact of deepfakes on society without being exposed to pages hosting nonconsensual fake images.
The ranking changes come after a WIRED investigation this month revealed that, over the past few years, Google leadership had rejected numerous proposals from employees and outside experts to combat the growing problem of intimate images of people spreading online without their consent.
Google has made it easier to request the removal of unwanted explicit material, but victims and their advocates have urged more proactive measures. At the same time, the company has been wary of overreach, concerned about policing the web too heavily or restricting access to legal adult content. A Google spokesperson said multiple teams across the company are working to strengthen protections against what it calls nonconsensual explicit imagery (NCEI).
The widening availability of AI image-generation tools, including some with few guardrails, has driven a surge in NCEI, according to victims' advocates. These tools make it easy for almost anyone to create explicit images of anyone else, from a middle school classmate to a world-famous celebrity.
A WIRED investigation in March found that Google had received more than 13,000 requests to remove links to a dozen of the most-trafficked websites hosting explicit deepfakes. Google removed the links in about 82 percent of those cases.
Under the new initiative Higham described, Google will extend measures it already uses to reduce the visibility of real but unwanted explicit content to synthetic, unwanted explicit images. After approving a removal request for a sexualized deepfake, Google will try to keep duplicates out of search results. It will also filter explicit results from queries similar to the one cited in the removal request, and sites that accrue "a high volume" of approved removal requests will be demoted in search rankings.
Higham said these measures should give people greater peace of mind, especially those worried that similar content about them could resurface in the future.
Google has acknowledged that its safeguards are not foolproof, and former employees and victims' advocates argue the company could do significantly more. In the US, people who search for child sexual abuse imagery are shown a prominent warning that such content is illegal. The warning's effectiveness is unclear, but advocates view it as a potential deterrent. Yet despite laws against distributing NCEI, no comparable warning appears for searches seeking sexualized deepfakes of adults. A Google spokesperson said there are no plans to introduce one.