How AI protects (and attacks) your inbox.

    When Aparna Pappu, vice president and general manager of Google Workspace, spoke at Google I/O on May 10, she outlined a vision of artificial intelligence helping users wade through their inboxes. Pappu showed how generative AI can summarize long email threads, extract relevant data from local files as you browse unread messages, and suggest text you can insert as you draft replies. Welcome to the inbox of the future.

    While the details of how it will arrive remain unclear, generative AI is poised to fundamentally change the way people communicate over email. A broader branch of AI, machine learning, performs a kind of safety dance long after you’ve logged out. “Machine learning has been a critical part of what we’ve been using to secure Gmail,” Pappu tells WIRED.

    A few erroneous clicks on a suspicious email can wreak havoc on your security, so how does machine learning help fend off phishing attacks? Neil Kumaran, a product leader at Google who focuses on security, explains that machine learning can look at the wording of incoming emails and compare it to previous attacks. It can also flag unusual message patterns and sniff out any weirdness that comes from the metadata.
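The core idea of comparing an incoming message’s wording to previously seen attacks can be sketched as a simple text-similarity check. This is an illustration, not Gmail’s actual system; the phishing templates and threshold below are invented for the example:

```python
from collections import Counter
import math

# Invented examples of previously seen phishing wording. Real systems
# learn these patterns from far larger labeled corpora.
KNOWN_PHISHING = [
    "urgent verify your account password now or it will be suspended",
    "you have won a prize click the link to claim your reward",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words vector: lowercase word counts."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two sparse word-count vectors."""
    dot = sum(a[word] * b[word] for word in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def phishing_score(message: str) -> float:
    """Highest similarity to any known phishing template (0.0 to 1.0)."""
    vec = vectorize(message)
    return max(cosine_similarity(vec, vectorize(t)) for t in KNOWN_PHISHING)
```

A message like “please verify your account password now” scores far higher than ordinary mail, so a filter could flag anything above a tuned threshold. Production systems use richer features (metadata, sender reputation, URLs) rather than word overlap alone.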

    Machine learning can do more than flag dangerous messages when they appear. Kumaran points out that it can also be used to track down the people responsible for phishing attacks. He says, “At the time we create an account, we do evaluations. We try to find out, ‘Does it look like this account will be used for malicious purposes?’” In the event of a successful phishing attack on your Google account, AI is also involved in the recovery process, using machine learning to determine which login attempts are legitimate.
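Google has not published how these evaluations work, but the underlying idea of scoring a login attempt against an account’s history can be sketched with a toy rule-based model. Every field, weight, and value below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    country: str    # where the attempt originates
    device_id: str  # identifier of the device used
    hour: int       # hour of day, 0-23

def legitimacy_score(attempt: LoginAttempt,
                     usual_countries: set,
                     known_devices: set,
                     usual_hours: set) -> float:
    """Crude weighted score: near 1.0 looks like the account owner,
    near 0.0 looks like a takeover attempt. Real systems learn such
    weights from data instead of hard-coding them."""
    score = 0.0
    if attempt.country in usual_countries:
        score += 0.4
    if attempt.device_id in known_devices:
        score += 0.4
    if attempt.hour in usual_hours:
        score += 0.2
    return score
```

An attempt from a familiar country, device, and time of day scores high; one matching none of the account’s history scores zero and would trigger extra verification.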

    “How do we extrapolate information from user reports to identify attacks we may not know about, or at least model the impact on our users?” asks Kumaran. Google’s answer, like the answer to many questions in 2023, is more AI. This instance of AI isn’t a flirty chatbot that teases you late into the night with long exchanges; it’s a burly bouncer who, with his algorithmic arms folded, kicks out the troublemakers.

    On the other hand, what else is fueling the phishing attacks landing in your inbox? I’ll give you a guess: first letter “A,” last letter “I.” Security experts have been warning for years about the possibility of AI-generated phishing attacks flooding your inbox. “It is very, very difficult to detect AI with the naked eye, either through the dialect or the URL,” said Patrick Harr, CEO of messaging security company SlashNext. Just as people use AI-generated images and videos to create reasonably convincing deepfakes, attackers can use AI-generated text to personalize phishing attempts in ways that are difficult for users to detect.

    Multiple companies focused on email security are building models and using machine learning techniques to further protect your inbox. “We take the corpus of data that comes in and do what’s called supervised learning,” said Hatem Naguib, CEO of Barracuda Networks, an IT security firm. In supervised learning, a person labels some of the email data: Which messages are likely safe? Which are suspicious? The model then generalizes from those labels to flag new phishing attacks.
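That supervised-learning step can be sketched with a tiny hand-labeled corpus and a Naive Bayes classifier. The training examples below are invented, and production systems train on vastly larger labeled datasets, but the shape of the process is the same: count word evidence per label, then score new messages against both labels:

```python
from collections import Counter, defaultdict
import math

# Invented labeled corpus standing in for human-labeled email data.
TRAINING = [
    ("verify your password immediately to avoid suspension", "suspicious"),
    ("claim your free prize by clicking this link now", "suspicious"),
    ("meeting notes from this morning are attached", "safe"),
    ("can you review the budget spreadsheet before friday", "safe"),
]

def train(examples):
    """Count word occurrences per label (multinomial Naive Bayes counts)."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log-probability, using
    add-one smoothing so unseen words don't zero out a label."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAINING)
```

With this toy model, a message full of prize-and-click language classifies as suspicious, while routine office mail classifies as safe. The value of the labeled data is exactly what Naguib describes: the counts gathered from labeled examples are what let the model judge messages it has never seen.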

    It’s a valuable aspect of phishing detection, but attackers continue to hunt for ways to get around protections. A persistent scam about a fabricated Yeti Cooler giveaway last year evaded the filters with an unexpected type of HTML anchor.

    Cybercriminals will remain determined to hack into your online accounts, especially your work email. Those using generative AI may be able to better translate their phishing attacks into multiple languages, and chatbot-style applications can automate parts of the back-and-forth messages with potential victims.

    Despite all the possible phishing attacks enabled by AI, Aparna Pappu remains optimistic about the continued development of better, more sophisticated security measures. “You’ve reduced the cost of what it takes to potentially lure someone in,” she says. “But on the other hand, thanks to these technologies, we’ve built more detection capabilities.”