Here’s what you need to know to stay safe from cybercriminals using ChatGPT to create deepfake chatbots, phishing campaigns and malware.
The global research team from Norton, a consumer Cyber Safety brand of Gen™ (NASDAQ: GEN), today released its quarterly Consumer Cyber Safety Pulse Report, detailing how cybercriminals can use artificial intelligence to create realistic and sophisticated threats. The latest report includes an analysis of how large language models can enhance cybercriminal tactics.
ChatGPT has captured the internet’s attention, with millions using the technology to write poems, craft short stories, answer questions and even ask it for advice. Meanwhile, cybercriminals are using it to create malicious threats, exploiting its impressive ability to generate human-like text that adapts to different languages and audiences.
Cybercriminals can now quickly and easily craft email or social media phishing lures that are more convincing than ever, making it harder to tell what’s legitimate and what’s a threat. In addition to writing lures, ChatGPT can also generate code. Just as ChatGPT makes developers’ lives easier with its ability to write and translate source code, bad actors can manipulate the technology and use it to scam at a larger scale and a faster pace.
“While the introduction of large language models like ChatGPT is exciting, it’s also important to note how cybercriminals can benefit from them and use them to conduct various nefarious activities. We’re already seeing bad actors use ChatGPT to create malware and other threats quickly and easily,” said Mark Gorrie, Asia Pacific Managing Director at Gen. “Unfortunately, it’s becoming more difficult than ever for people to spot scams on their own, which is why comprehensive Cyber Safety solutions that look at all aspects of our digital lives are needed, from our mobile devices and online identity to the wellbeing of those around us. Being cyber vigilant is integral to our digital lives.”
In addition to using ChatGPT for more efficient phishing, Norton experts warn bad actors can also use it to create deepfake chatbots. These chatbots can impersonate humans or legitimate sources, like a bank or government entity, to manipulate victims into handing over personal information that attackers can then use to access sensitive accounts, steal money or commit fraud.
To stay safe from these new threats, Norton experts recommend:
- Avoiding chatbots that don’t appear on a company’s official website or app, and being cautious about providing personal information to anyone you’re chatting with online.
- Thinking before you click on links in unsolicited phone calls, emails or messages (a simplified example of one such link check appears after this list).
- Keeping your security solution updated and making sure it has a full set of security layers that go beyond known-malware recognition, such as behavior detection and blocking.
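For readers curious what an automated version of the link advice above might look like, here is a minimal, illustrative Python sketch (not Norton’s detection logic) that flags a link whose visible text claims one domain while the underlying URL points to another, a common phishing tell. The function names and the naive domain handling are assumptions for illustration only; real products rely on far more robust techniques, such as the Public Suffix List and URL reputation data.

```python
from urllib.parse import urlparse

def registered_domain(host: str) -> str:
    """Naive approximation of a site's registrable domain: the last two
    labels of the hostname. A real checker would use the Public Suffix List."""
    parts = host.lower().rstrip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host.lower()

def looks_like_phishing_link(display_text: str, href: str) -> bool:
    """Return True when a link's visible text names one domain but its
    actual target points to a different one (a common phishing tell)."""
    text = display_text.strip()
    # Only compare when the visible text itself looks like a domain or URL.
    if " " in text or "." not in text:
        return False
    shown = text if "//" in text else "http://" + text
    shown_host = urlparse(shown).hostname or ""
    target_host = urlparse(href).hostname or ""
    if not shown_host or not target_host:
        return False
    return registered_domain(shown_host) != registered_domain(target_host)

# The text claims mybank.com, but the link leads to attacker.example: flagged.
print(looks_like_phishing_link("www.mybank.com",
                               "http://mybank.com.attacker.example/login"))  # True
# Text and target agree: not flagged.
print(looks_like_phishing_link("www.mybank.com",
                               "https://www.mybank.com/login"))              # False
```

Even a crude heuristic like this catches many lookalike links, which is why the layered protections described above combine such checks with behavior detection rather than relying on any single signal.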