Your AI clone can target your family, but there is a simple defense

    The warning extends beyond voice cloning. The FBI announcement details how criminals are also using AI models to generate convincing profile photos, identification documents, and chatbots embedded in fraudulent websites. These tools automate the creation of deceptive content while reducing the previously obvious signs of a scam, such as poor grammar or obviously fake photos.

    Just as we warned in a 2022 piece about life-destroying deepfakes based on publicly available photos, the FBI also recommends limiting public access to recordings of your voice and images online. The agency suggests making social media accounts private and limiting followers to known contacts.

    Origin of the secret word in AI

    As far as we know, the first appearance of the secret word in the context of modern AI voice synthesis and deepfakes traces back to an AI developer named Asara Near, who announced the idea on Twitter on March 27, 2023.

    “(I)t may be helpful to establish a 'proof of humanity' word, which your trusted contacts can ask you for,” Near wrote. “In the event they receive a strange and urgent voice or video call from you, this can help assure them they are actually talking to you, and not a deepfaked/deepcloned version of you.”

    Since then, the idea has spread widely. In February, Rachel Metz addressed the topic for Bloomberg, writing, “The idea is becoming increasingly common in the AI research community, one founder told me. It's also simple and free.”

    Of course, passwords have been used to verify identity since ancient times, and it seems likely that some science fiction story has already dealt with passwords and robot clones. It's striking that in this new era of high-tech AI identity fraud, this age-old invention, a special word or phrase known to only a few, can still prove so useful.