ChatGPT, an AI-powered chatbot, presents cybercriminals and scammers with a free and easy-to-use tool for generating socially engineered attacks such as financial fraud, credential phishing, vendor fraud, and more. Learn how to protect your employees and organization from these types of attacks and data loss.
Shortly after we started Armorblox in 2017, GPT-1 was released, and it was one of the inspirations for us to look into applying pre-trained models, transformers, and other language-based approaches to detecting and protecting critical business workflows. Today, we use these techniques, among others, to protect four critical business workflows that attackers attempt to compromise: those that involve money, credentials, sensitive data, and confidential data.
During our investor pitches, we would warn that advances in technology would make it easier and easier for attackers to use language engines to generate phishing emails targeting organizations and their employees. This would remove the telltale signs of attackers who are not native English speakers, whose emails often contain grammatical errors and spelling mistakes. It would also make it easier for them to execute targeted email attacks at mass scale. Mass customization makes the leap from fast fashion to cybersecurity.
What is ChatGPT?
Enter stage left, ChatGPT. ChatGPT is an artificial intelligence (AI) powered chatbot that creates text in response to user prompts. Its replies are surprisingly coherent, well-written, and often accurate. Its abilities range from writing songs and poetry to answering technical questions and more.
This topic has dominated Tech Twitter for the past few weeks and has already inspired its fair share of passionate articles, both glowing and scathing. But it is hard to sit on the sidelines and pretend that ChatGPT did not happen. If you have ten minutes to spare, stop reading now, go to https://chat.openai.com/chat, create an account, and ask it some questions. You will be amazed how the next few hours disappear.
While ChatGPT has in effect become the "new shiny object" on Twitter and other social media, just as Stable Diffusion, DreamBooth, and Lensa have in the past, the real-world consequences for cybersecurity are all too real.
ChatGPT allows attackers to quickly acquire the messaging needed for attacks ranging from targeted phishing and social engineering to executive impersonation and financial fraud. Here are just a few examples of the types of well-crafted and convincing messages this platform puts in attackers' hands at higher speed and zero cost.
Credential Phishing Attack
An email that could be used in a brand impersonation and credential phishing attack.
Wire Transfer Fraud Attack
A wire transfer fraud email that could be used to target wealth management firms.
Vendor Fraud Attack
An email that could be used in a vendor fraud attack to target industry verticals and products/services supplied.
Request for Sensitive Information
An email that was generated to impersonate a manager in order to steal sensitive information about employees.
Payroll Fraud Attack
A quickly generated request for payroll checks to be deposited to a different bank account. This one even asks the reader how their weekend was.
The list goes on. Yes, ChatGPT does flag inappropriate prompts and answers as such, but it still allows any actor to set up accounts and generate as many of these emails as they want.
How to Protect Critical Business Workflows
In a time when bad actors can easily access AI, ML, and NLU tools for malicious intent, how do organizations protect themselves from these targeted attacks? It will come down to the tools organizations use to protect themselves. Products that offer protection against compromised business workflows should meet these requirements:
- They need to be built with state-of-the-art techniques at least as good as what attackers have access to. The legacy approach of flagging bad emails based only on headers and exception/block lists will not work.
- They need to protect all four categories of sensitive business workflows. Point solutions that only protect against some targeted email attacks but do nothing to prevent data exfiltration are partial at best, and they do not learn from attack patterns and threat actors across all four workflow categories. This becomes increasingly relevant when the same platform can learn from and protect against all four kinds of attacks.
- They need a broad set of algorithms to protect against these modern attacks. It is not sufficient to look only for anomalous behavioral patterns in communications or other statistical signals. They need strong deep learning algorithms, machine learning models, data science approaches, and natural language-based techniques. They should clearly highlight which workflows were compromised and provide statistics on which workflows are most vulnerable. That is when you know they are built on the premise of workflow compromise detection, not just detecting one bad email in isolation.
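To make the multi-signal idea above concrete, here is a minimal, hypothetical sketch of scoring an email by combining several independent signals (urgency language, financial keywords, credential lures, and display-name impersonation). This is not Armorblox's actual implementation; a production system would use trained language models rather than keyword heuristics, and the signal names, keyword lists, and the `example.com` trusted domain are all illustrative assumptions.

```python
import re
from dataclasses import dataclass

@dataclass
class Email:
    sender: str        # envelope/from address
    display_name: str  # name shown to the recipient
    body: str

# Hypothetical keyword heuristics; real detectors would use
# language models trained on attack corpora instead.
URGENCY = re.compile(r"\b(urgent|immediately|asap|right away|today)\b", re.I)
FINANCIAL = re.compile(r"\b(wire transfer|bank account|payroll|invoice|payment)\b", re.I)
CREDENTIAL = re.compile(r"\b(password|verify your account|sign in|login)\b", re.I)

def display_name_mismatch(email: Email, known_executives: set[str]) -> bool:
    # Flags executive impersonation: a trusted display name paired
    # with a sending domain other than the (assumed) company domain.
    domain = email.sender.split("@")[-1].lower()
    return email.display_name in known_executives and domain != "example.com"

def score(email: Email, known_executives: set[str]):
    # Each signal is weak alone; the combination is what matters.
    signals = {
        "urgency": bool(URGENCY.search(email.body)),
        "financial": bool(FINANCIAL.search(email.body)),
        "credential": bool(CREDENTIAL.search(email.body)),
        "impersonation": display_name_mismatch(email, known_executives),
    }
    return sum(signals.values()), signals

# Example: a payment-fraud email spoofing an executive's display name.
email = Email(
    sender="ceo@examp1e-corp.net",
    display_name="Jane Doe",
    body="I need you to process a wire transfer immediately. Please confirm today.",
)
total, signals = score(email, known_executives={"Jane Doe"})
```

In this example three of the four signals fire (urgency, financial, impersonation), which is exactly the kind of correlated evidence a single header check or block list would miss.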
The ChatGPT chapter has just begun. More will be written and said about regulation, compliance, detecting bot-generated essays, preventing bad-actor access, open-sourcing these models, and offering them over APIs for companies to use. We have just scratched the surface.
These are exciting times in the world of cybersecurity in general, and securing communications in particular. Never has it been more important to educate and empower your employees so that attackers cannot compromise them to steal valuable information, credentials, or money from your organization.
Protect Your Business From Email Attacks With Armorblox
It’s not enough to invest in security awareness training when preparing cyber defenses for these types of targeted attacks. Gaps in security controls aid attackers in deploying socially engineered attacks on your end users and organization more effectively.
With advanced technologies like machine learning and natural language understanding, Armorblox enables your company to be well-positioned against targeted attacks.