Live Deepfake Video Frauds Have Arrived. This Solution Aims to Eliminate Them
Christopher Ren delivers an impressive imitation of Elon Musk.
Ren is a product manager at Reality Defender, a company that builds tools to fight AI-generated disinformation. During a video call I joined last week, Ren used a widely shared piece of code from GitHub and a single photograph to generate a crude deepfake of Elon Musk and map it onto his own face, a demonstration of how the company's new AI-detection tool works. As Ren impersonated Musk on our call, still frames from the video were streamed in real time to Reality Defender's analytical model, and an on-screen widget from the company flagged that I was probably looking at an AI-generated deepfake rather than the real Elon Musk.
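Reality Defender has not published how its tool turns noisy per-frame model scores into the stable on-screen verdict described above, so the following is only a minimal hypothetical sketch of that client-side aggregation step: all class, method, and label names here are invented, and the per-frame probabilities would come from a proprietary detection model.

```python
from collections import deque


class DeepfakeCallMonitor:
    """Hypothetical aggregator: smooths per-frame deepfake probabilities
    from a detection model into a rolling call-level verdict."""

    def __init__(self, window: int = 30, threshold: float = 0.7):
        self.scores = deque(maxlen=window)  # rolling window of recent frame scores
        self.threshold = threshold          # flag the call above this average

    def add_frame_score(self, score: float) -> None:
        # `score` is the model's probability (0.0-1.0) that a frame is synthetic
        self.scores.append(score)

    @property
    def average(self) -> float:
        # Mean score over the current window; 0.0 before any frames arrive
        return sum(self.scores) / len(self.scores) if self.scores else 0.0

    def verdict(self) -> str:
        return ("likely deepfake" if self.average >= self.threshold
                else "no manipulation detected")
```

Smoothing over a window rather than reacting to single frames is one plausible design choice: it keeps a momentary compression glitch from triggering a false alarm, while a sustained run of high scores still flips the verdict within a second or two of video.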
To be fair, I never believed I was actually on a call with Musk, and the demonstration was built to cast Reality Defender's nascent capabilities in a flattering light. But the underlying problem is real. Real-time video deepfakes are a growing threat to governments, businesses, and individuals. Recently, the chair of the US Senate Foreign Relations Committee was duped by a fake video call from someone posing as a Ukrainian official. Earlier this year, an international engineering firm lost millions of dollars after an employee fell for a deepfake video scam, and romance scams using similar technology have been cheating victims of all kinds.
Ben Colman, CEO and cofounder of Reality Defender, predicts that in just a few months, we'll witness a surge in deepfake videos and direct scams. He emphasizes that particularly during important video calls, visual evidence should not automatically be considered trustworthy.
The company is focused on working with business and government clients to combat AI-powered deepfakes, but Colman is keen that his firm not be seen as opposing AI in general. "We are strong supporters of AI," he says. "We believe that virtually all applications can revolutionize sectors like healthcare, enhance efficiency, and boost creative endeavors. But in these very rare edge cases, the risks are disproportionately severe."
Reality Defender is building a real-time detection tool, launching first as a Zoom plug-in, meant to flag whether participants in a video call are real people or AI-generated fakes. The company is still benchmarking how accurately it distinguishes genuine participants from counterfeit ones, and the general public will have to wait: the beta version will initially be available only to a select group of the company's clients.
Reality Defender is not the first to announce plans for real-time deepfake detection. Last year, Intel introduced FakeCatcher, a tool designed to analyze fluctuations in facial blood flow to determine whether the person in a video is genuine. Like Reality Defender's product, Intel's is not available to the general public.
Academic researchers are also exploring ways to counter this type of deepfake threat. "The technology to produce deepfakes has advanced so far that it requires minimal data," notes Govind Mittal, a doctoral student in computer science at New York University. "With just 10 of my photos from Instagram, anyone could use those to impersonate me. Even ordinary people are at risk."
Live deepfakes are no longer a risk reserved for the wealthy, celebrities, or people with large digital footprints. Research at New York University by Mittal and professors Chinmay Hegde and Nasir Memon proposes a way to keep AI-generated bots out of video calls: a CAPTCHA-like, video-based verification challenge that users must pass before they can join.
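The NYU team's challenge-response system is not described in detail in this article, so here is only a minimal sketch of the general idea, with invented prompt names and a stubbed-out verifier: a participant is shown a short, unpredictable sequence of physical actions, which are hard for a live deepfake pipeline to render convincingly on the fly, and is admitted to the call only if the observed actions match.

```python
import random

# Hypothetical prompt vocabulary; the actual NYU challenges are not public.
ACTIONS = ["turn head left", "turn head right", "cover one eye", "smile", "look up"]


def make_challenge(n: int = 3, seed: int = None) -> list:
    """Pick n distinct action prompts in a random, unpredictable order."""
    rng = random.Random(seed)
    return rng.sample(ACTIONS, n)


def verify(challenge: list, detected_actions: list) -> bool:
    """Admit the caller only if the observed actions match the prompts in order.

    A real verifier would run pose and gesture recognition on live video;
    this sketch simply compares the already-recognized action sequence.
    """
    return detected_actions == challenge
```

Randomizing both the choice and the order of prompts matters: a pre-recorded deepfake clip cannot anticipate the sequence, so an attacker would have to synthesize each requested motion in real time.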
Colman says Reality Defender's ability to improve its detection model depends on access to more data, a challenge echoed by many AI startups today. He is optimistic that new partnerships will fill those gaps, and hints, without giving specifics, that several deals are likely in the next year. The company already has one such arrangement: after an AI-generated robocall impersonating US President Joe Biden was linked to the AI-audio company ElevenLabs, the startup partnered with Reality Defender to help prevent similar abuses.
So how can you protect yourself from video-call scams right now? WIRED's core advice for avoiding AI voice-call scams applies here too: don't overestimate your ability to spot video deepfakes. The technology in this space is advancing quickly, and whatever telltale signs you currently rely on to identify AI deepfakes may not hold up as the underlying models improve.
Colman remarks, "We wouldn't expect my 80-year-old mother to identify ransomware in an email," attributing this to her lack of expertise in computer science. He suggests that as AI detection advances and proves to be consistently reliable, the notion of real-time video authentication may become as commonplace and unobtrusive as the malware scanner that operates silently in the background of your email inbox.