Combating the Deepfake Threat: How GetReal Labs Aims to Outsmart AI Fraudsters

By Will Knight
Several Fortune 500 companies are testing software that can detect a deepfake impersonating a real person on a live video call, prompted by a string of scams in which fraudulent job applicants pocketed a signing bonus and disappeared.
The detection technology comes from GetReal Labs, a startup founded by Hany Farid, a UC Berkeley professor renowned for his expertise in deepfakes and image and video manipulation.
GetReal Labs has developed a suite of tools for determining whether images, audio, and video were generated or manipulated, whether with AI or by hand. Its software can analyze a face in a video call and spot clues suggesting it was artificially generated and swapped onto the body of a real person.
These incidents are not hypothetical, Farid says; reports of them are increasingly common. In some cases the goal appears to be stealing intellectual property by infiltrating a company; in others the motivation seems purely financial, with the perpetrators simply taking the signing bonus and vanishing.
In 2022, the FBI warned that scammers were using deepfakes to impersonate real people in job interviews conducted over video. Arup, a UK-based design and engineering firm, lost $25 million after a scammer used the technology to impersonate its chief financial officer on a video call. Romance scammers have also adopted deepfakes to trick victims into parting with their money.
Impersonating a real person on a live video feed is just one example of the startling deception now possible with AI. Large language models can convincingly mimic a person's writing style in text conversations, and short video clips can be generated with tools such as OpenAI's Sora. Rapid advances in AI have made deepfakes both more convincing and more accessible: open source software makes it easy to refine deepfake techniques, and off-the-shelf AI tools can turn text prompts into lifelike images and video.
Impersonating a person on a live video feed, however, is a newer challenge. Creating such a deepfake typically involves combining machine learning with face-tracking algorithms to seamlessly graft a fake face onto a real one, letting an intruder control what the fabricated likeness appears to say and do on camera.
Farid presented a demonstration of GetReal Labs' technology to WIRED. Demonstrating with an image of a corporate boardroom, the program examines the image's metadata for indicators of alteration. Prominent AI corporations like OpenAI, Google, and Meta now embed digital markers in images they generate through AI, offering a reliable method to identify them as fabricated. Nevertheless, not all platforms mark their images this way, and open-source generators can be adjusted to omit these markers. Moreover, it's relatively simple to tamper with metadata.
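As a toy illustration of the kind of first-pass metadata check described above, the sketch below scans a JPEG's raw bytes for telltale clues. This is not GetReal Labs' method: the function name and marker list are invented for illustration, and real forensic tools parse EXIF, XMP, and C2PA structures properly rather than searching for substrings.

```python
def quick_metadata_scan(data: bytes) -> list[str]:
    """Rough first-pass scan of a JPEG's raw bytes for metadata clues.

    Hypothetical sketch only: real tools parse EXIF/XMP/C2PA structures;
    this just looks for telltale substrings that common editors and
    generators leave behind, and notes when metadata is missing.
    """
    findings = []
    if not data.startswith(b"\xff\xd8"):
        # Every JPEG begins with the SOI (start-of-image) marker FF D8.
        findings.append("not a JPEG, or the header was altered")
    if b"Exif\x00\x00" not in data:
        # EXIF payloads in JPEG APP1 segments begin with this identifier.
        findings.append("no EXIF block: metadata may have been stripped")
    # Illustrative marker list only; real software tags vary widely.
    for marker in (b"Photoshop", b"Midjourney", b"Stable Diffusion"):
        if marker in data:
            findings.append("software tag mentions " + marker.decode())
    return findings
```

As the article notes, metadata is easy to strip or forge, so an empty findings list proves nothing; a hit is merely a cheap first signal before deeper analysis.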
GetReal Labs uses several AI models, trained to distinguish authentic imagery from manipulated imagery, to flag probable fakes. It also combines AI with traditional forensic techniques that let users probe an image for physical and visual inconsistencies, such as shadows that point in different directions despite sharing a single light source, or shadows that don't match the objects casting them.
Another check: lines on objects shown in perspective should converge to a single vanishing point, as they would in a real scene.
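The vanishing point test lends itself to a small geometric sketch. Assuming line segments have already been extracted from the image (the function names below are invented for illustration, not GetReal Labs' code), lines that should recede to the same point can be extended and intersected pairwise; a large spread among the intersections hints at a composited or generated image. The same convergence idea underlies checks on shadow direction.

```python
import itertools
import math

def line_from_segment(p, q):
    # Homogeneous line coefficients (a, b, c) with a*x + b*y + c = 0.
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    # Intersection of two lines in (a, b, c) form; None if parallel.
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1
    if abs(d) < 1e-9:
        return None
    return ((b1 * c2 - b2 * c1) / d, (a2 * c1 - a1 * c2) / d)

def vanishing_point_spread(segments):
    """Extend segments that should share a vanishing point, intersect
    them pairwise, and return the maximum distance between the
    intersection points (0 means perfectly consistent perspective)."""
    lines = [line_from_segment(p, q) for p, q in segments]
    pts = [ip for l1, l2 in itertools.combinations(lines, 2)
           if (ip := intersect(l1, l2)) is not None]
    if len(pts) < 2:
        return 0.0
    return max(math.dist(a, b) for a, b in itertools.combinations(pts, 2))
```

In practice a forensic tool would set a tolerance calibrated to segment-detection noise rather than expecting exact convergence.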
Farid argues that while many emerging companies pledge to detect deepfakes through extensive use of artificial intelligence, the inclusion of human-led forensic examination will be key in identifying altered media. "Anyone claiming that training an AI model alone will solve this issue is either misguided or dishonest," he states.
The need for vigilance extends beyond big corporations. In politics, where deepfake-driven media manipulation is a major concern, Farid's technology could also prove valuable. The WIRED Elections Project is tracking deepfakes deployed to boost or smear political candidates in countries including India, Indonesia, and South Africa. In the US, a fraudulent robocall imitating Joe Biden was used in January to discourage people from voting in the New Hampshire presidential primary. More recently, deceptively edited "cheapfake" videos have spread widely online, and a Russian propaganda unit was caught distributing an AI-altered video designed to cast Biden in a bad light.
Vincent Conitzer, a computer scientist at Carnegie Mellon University in Pittsburgh and a coauthor of the book Moral AI, expects AI-generated fakery to become both more prevalent and more malicious, increasing the need for technology that can counter it.
Conitzer describes the situation as an arms race. A tool that is highly effective at spotting deepfakes today carries no guarantee against future versions, he notes; worse, a good detector can itself be used adversarially, teaching deepfake generators how to evade detection.
GetReal Labs acknowledges the ongoing challenge of staying ahead in the fight against deepfake technology. Ted Schlein, who helped found GetReal Labs and has extensive experience in computer security, believes it's only a matter of time before deepfake fraud becomes a common issue for most people. This is due to cybercriminals becoming increasingly skilled at using this technology to create sophisticated scams. He also points out that altered media is now a major worry for numerous chief security officers, describing disinformation as the modern equivalent of malware.
Farid highlights that the manipulation of media poses a considerable threat, suggesting it's a more complex issue to tackle. He explains, "I have the option to restart my computer or purchase a new one. However, the corruption of human thought poses a fundamental danger to our democratic system."
© 2024 Condé Nast. All rights reserved.