
How ChatGPT – and bots like it – can spread malware

    The AI landscape has started to move very, very fast: consumer-facing tools like Midjourney and ChatGPT can now produce incredible image and text results in seconds from natural language prompts, and we’re seeing them deployed everywhere from web search to children’s books.

    However, these AI applications can also be put to more nefarious uses, including spreading malware. Take the traditional scam email, for example: it’s usually riddled with obvious grammar and spelling mistakes – mistakes that the latest generation of AI models don’t make, as noted in a recent Europol advisory report.

    Think about it: many phishing attacks and other security threats rely on social engineering, tricking users into revealing passwords, financial information, or other sensitive data. The persuasive, authentic-sounding text required for these scams can now be pumped out quite easily, with no human effort required, and endlessly tweaked and refined for specific audiences.

    In the case of ChatGPT, it’s important to note first that developer OpenAI has built safeguards into it. Ask it to “write malware” or a “phishing email” and it will tell you that it’s “programmed to follow strict ethical guidelines that prohibit me from participating in any malicious activity, including writing or helping to create malware.”

    ChatGPT won’t code malware for you, but it’s polite about it.

    OpenAI via David Nield

    However, these protections aren’t that difficult to get around: ChatGPT can certainly code, and it can certainly compose emails. Even if it doesn’t know it’s writing malware, it can be tricked into producing something along those lines. There are already signs that cybercriminals are working their way around the safeguards that are in place.

    We’re not singling out ChatGPT here, but pointing out what’s possible once large language models (LLMs) like it are used for more sinister purposes. Indeed, it’s not hard to imagine criminal organizations developing their own LLMs and similar tools to make their scams sound more convincing. And it’s not just text either: audio and video are more difficult to fake, but it’s happening too.

    Whether it’s your boss urgently requesting a report, your company’s tech support telling you to install a security patch, or your bank notifying you of an issue you need to respond to, all of these potential scams rely on building trust and sounding genuine, and that’s something AI bots are very good at. They can produce text, audio, and video that sounds natural and is tailored to specific audiences, and they can do it quickly and consistently on demand.