
ChatGPT pushes the frontier of what businesses can do. Tasks that previously took hours of manual work (and money) can now be done in minutes and at no cost. At the same time, we at Armorblox have been vocal about the serious security implications this technology can have in the hands of bad actors. Bad actors no longer need to be fluent in English to have ChatGPT draft all the messaging they need to execute targeted email attacks, from financial fraud and credential phishing to vendor fraud and ransomware.
In a time when bad actors can easily access AI, ML, and LLM tools for malicious intent, organizations need to take extra steps to protect themselves from these targeted attacks. For more information, check out my co-founder Anand Raghavan's previous post on Protecting Critical Business Workflows in the Age of ChatGPT.
Almost six years ago, my co-founders and I started Armorblox to use language as a new signal to protect organizations against targeted email attacks. GPT-1 had just been released, and it was becoming clear that technology like this in the hands of attackers could make email security a much harder problem. Fast forward to today, and LLMs have improved at a startling rate. Below are some of the most frequently asked questions I get about AI and LLMs like GPT. I've shared my thoughts on this technology to date, where I believe it's headed next, and how it will change cybersecurity forever.
How can LLM models be used to enhance cybersecurity measures, such as detecting and preventing cyber attacks?
AI and LLMs are incredibly powerful tools for enhancing cybersecurity measures. At Armorblox, we use them to analyze emails in real time, detecting and preventing cyber attacks before they have a chance to cause harm. By leveraging the latest in machine learning, deep learning, data science, and large language models like GPT, we can actually understand the content and context of communications. With technology like this, we're able to stay one step ahead of even the most sophisticated cyber threats.
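To make that concrete, here is a minimal sketch (illustrative assumptions throughout, not Armorblox's actual pipeline) of how a language-derived intent score and simple context signals might combine into one verdict. The Email type, the score_intent stand-in, and the thresholds are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def score_intent(text: str) -> float:
    """Trivial stand-in for an LLM classifier returning P(phishing-like intent)."""
    cues = ("wire transfer", "verify your account", "gift card")
    return min(1.0, 0.4 * sum(cue in text.lower() for cue in cues))

def is_suspicious(email: Email, known_senders: set) -> bool:
    intent = score_intent(email.subject + "\n" + email.body)
    # Context moves the bar: an unknown sender using payment language needs
    # less model confidence to be flagged than a long-known colleague does.
    threshold = 0.3 if email.sender not in known_senders else 0.7
    return intent >= threshold

mail = Email("ceo.urgent@freemail.example", "Quick favor",
             "Are you at your desk? I need a wire transfer sent today.")
print(is_suspicious(mail, {"colleague@company.example"}))  # True
```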
What are some potential risks and challenges associated with using AI and LLM in cybersecurity?
When it comes to using AI and LLMs in cybersecurity, there are definitely potential risks and challenges that need to be addressed. Adversarial attacks intentionally deceive an AI model by feeding it subtly manipulated data. Bias in the data used to train models, meanwhile, can result in AI systems that perpetuate societal injustices and discrimination. Both concerns must be addressed and resolved. But at the end of the day, the benefits of using AI and LLMs in cybersecurity far outweigh the risks at this point in time. The truth of the matter is that this technology is here to stay, and bad actors won't hesitate to use it against organizations in an attempt to compromise sensitive data and steal money. The earlier we adopt the same technology and use it in our defense, the better.
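As a toy illustration of what an adversarial attack can look like in the email domain (a sketch, not any specific product's detector), swapping Latin letters for visually identical Cyrillic homoglyphs is enough to slip past a naive keyword filter while the message still reads normally to a human:

```python
# Map a few Latin letters to visually identical Cyrillic homoglyphs.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440"}

def perturb(text: str) -> str:
    """Subtly manipulate the input without changing how it looks to a person."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

def naive_filter(text: str) -> bool:
    """Stand-in for a brittle detector that matches on exact keywords."""
    return "password" in text.lower()

msg = "Please confirm your password here"
print(naive_filter(msg))           # True  -- caught
print(naive_filter(perturb(msg)))  # False -- evades the exact-match check
```

Models trained on raw character sequences can be fooled in the same spirit, which is why robustness testing against perturbed inputs matters.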
How can AI and LLM be used to improve incident response and mitigation in the event of a cybersecurity breach or attack?
Incident response and mitigation are crucial aspects of cybersecurity, and AI and LLMs can play a critical role in this process. By analyzing data in real time, we can identify and respond to threats faster than ever, minimizing the damage caused by cyber attacks. Armorblox is able to automate tasks that have been slowing down security teams, such as remediating user-reported email threats. In addition, our 2023 Email Security Threat Report found that security teams waste 27 hours a week remediating graymail. This is where AI and GPT excel, enabling security teams to reclaim that time and reach new levels of focus and productivity.
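As one hypothetical example of that kind of automation, confirming a single user-reported message as malicious can trigger a sweep that quarantines every copy across all mailboxes. The structures below are illustrative stand-ins for a real mail platform's API:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    message_id: str
    quarantined: bool = False

@dataclass
class Mailbox:
    owner: str
    messages: list = field(default_factory=list)

def remediate_reported_threat(reported_id: str, mailboxes: list) -> int:
    """Quarantine every copy of a confirmed-malicious message, in one pass."""
    removed = 0
    for box in mailboxes:
        for msg in box.messages:
            # Match on Message-ID so every recipient's copy is caught,
            # not just the copy belonging to the user who reported it.
            if msg.message_id == reported_id and not msg.quarantined:
                msg.quarantined = True
                removed += 1
    return removed

boxes = [Mailbox("alice", [Message("<abc@evil>")]),
         Mailbox("bob", [Message("<abc@evil>"), Message("<ok@corp>")])]
print(remediate_reported_threat("<abc@evil>", boxes))  # 2
```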
How can AI and LLM be used to improve user authentication and access control in cybersecurity?
User authentication and access control are critical components of cybersecurity, and AI and LLMs can play a key role in improving these processes. By analyzing user behavior and identifying patterns that could indicate an attack, we're able to detect and prevent unauthorized access to sensitive systems and data. And because our systems are constantly learning, they adapt to new threats and stay one step ahead of cybercriminals. For example, Armorblox is able to stop account takeovers that frequently get past legacy security controls because our algorithms analyze thousands of signals, such as impossible travel, unusual mail patterns, and suspicious mail-forwarding rules.
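For instance, the impossible-travel signal boils down to a speed check between consecutive logins. A minimal sketch, where the coordinates and the 900 km/h ceiling are illustrative assumptions:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900):
    """Each login is (lat, lon, unix_seconds); flag physically implausible pairs."""
    dist = haversine_km(login_a[0], login_a[1], login_b[0], login_b[1])
    hours = abs(login_b[2] - login_a[2]) / 3600
    return hours > 0 and dist / hours > max_kmh

# A San Francisco login followed 30 minutes later by one from London:
print(impossible_travel((37.77, -122.42, 0), (51.51, -0.13, 1800)))  # True
```

In practice the login history would come from identity-provider logs rather than hard-coded tuples, and this is only one of many signals, but the core test, distance over elapsed time, stays the same.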
Who has the “home field advantage” in the battle of AI vs. AI, the attacker or the defender?
The good news is that OpenAI also makes GPT software available, through its GPT-1, GPT-2, and GPT-3 releases, to the good guys and gals protecting organizations against attacks like these. It's one thing for an AI bot to create a grammatically correct paragraph for attackers, but generalized AI models don't take into account the unique contextual characteristics inherent to each organization, such as the parlance, the shorthand, the cultural tone, and the informal and formal communication channels. That's where AI in Natural Language Understanding (NLU) for cybersecurity has the advantage.
To fight “fire with fire,” AI in cyberdefense is built on generalized large language models similar to those behind ChatGPT, but further evolved with customized, pre-trained AI models bespoke to each organization, essentially arming every organization with its very own frontline AI defender. This creates a home field advantage for AI platforms in corporate cybersecurity because they learn what is and is not normal behavior for their organization and fine-tune themselves accordingly.
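One way to picture that home field advantage (a simplified sketch, not the actual training pipeline) is a baseline fitted on an organization's own mail, so that messages far from its normal vocabulary stand out:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# A handful of this (fictional) organization's normal internal mail.
org_mail = [
    "Sprint review moved to Thursday, same Zoom link",
    "Reminder: expense reports due by end of quarter",
    "Design doc for the billing service is ready for comments",
]

vec = TfidfVectorizer().fit(org_mail)  # learn the org's own vocabulary
baseline = np.asarray(vec.transform(org_mail).mean(axis=0)).ravel()  # "normal" centroid

def anomaly_score(message: str) -> float:
    """Cosine distance from the org baseline; higher = less like normal mail."""
    v = vec.transform([message]).toarray().ravel()
    denom = np.linalg.norm(v) * np.linalg.norm(baseline)
    return 1.0 if denom == 0 else 1.0 - float(v @ baseline) / denom

print(anomaly_score("Urgent: buy gift cards and send codes now"))  # 1.0, nothing in common
print(anomaly_score("Reminder: expense reports due Friday"))       # well below 1.0
```

A production system would use far richer representations than TF-IDF, but the principle is the same: the defender's model is tuned to one organization's communication, which a generic attacker's model never sees.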
It is now officially “game on” between attackers and defenders, and it's more important than ever to ensure that the solutions organizations use to protect themselves have at least the same level of intelligence and maturity as the tools attackers have free access to. Language-based techniques for protecting against targeted attacks will only become more pertinent as the adoption of AI for launching attacks continues to increase.