Real-time deepfakes are no longer limited to billionaires, public figures, or people with an extensive online presence.

Mittal's research at NYU, with professors Chinmay Hegde and Nasir Memon, proposes a potential challenge-based approach to blocking AI bots from video calls, requiring participants to pass a kind of video CAPTCHA test before joining.
As Reality Defender works to improve the detection accuracy of its models, Colman says access to more data is a crucial challenge to overcome – a common obstacle for the current crop of AI-focused startups. He's hopeful that more partnerships will fill these gaps, and while he declined to share specifics, several new deals seem likely next year. After ElevenLabs was linked to a deepfake voice call of US President Joe Biden, the AI audio startup struck a deal with Reality Defender to limit potential abuse.
What can you do now to protect yourself from video call scams? Much like WIRED's core advice on avoiding AI voice call scams, it's crucial not to get cocky about your ability to spot video deepfakes. The technology in this area continues to evolve rapidly, and any telltale signs you rely on now to spot AI deepfakes may not be as reliable after the next upgrades to the underlying models.
“We don't ask my 80-year-old mother to report ransomware in an email,” says Colman. “Because she's not a computer scientist.” In the future, if AI detection continues to improve and proves to be reliably accurate, it's possible that real-time video authentication will be as obvious as that malware scanner quietly buzzing in the background of your email inbox.
This story originally appeared on wired.com.