I’m often asked, “What is Intent-Based Detection?” My co-founder, Abhishek Singh, has written research papers and spoken at conferences extensively on this topic, but I wanted to break it down in a way that everyone—whether technical or not—can understand.
Traditional email security tools often look for specific words or phrases—like “invoice,” “password reset,” or “urgent request”—to flag potential threats. But attackers, now operating at scale with AI, have learned to work around this by changing wording, using different languages, or mimicking normal business conversations. This is where Intent-Based AI Threat Detection makes a difference.
Imagine receiving two emails:
✅ Safe Email:
“Hey Alice, I just sent the payment for the invoice. Let me know when you receive it. Thanks!”
❌ Business Email Compromise (BEC) Email:
“Hi Alice, we processed the payment for your latest invoice. Click here to confirm the details.”
A traditional security filter might only look for the word ‘invoice’ and treat both emails as safe. But intent-based detection understands the meaning and purpose of the email. In the BEC example, there’s an urgent call to action (“Click here”) and an attempt to manipulate the recipient into taking an unsafe action.
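To make that limitation concrete, here is a minimal sketch of a keyword-based filter (the keyword list is hypothetical, not any vendor’s actual rules). Because neither message contains a “suspicious” phrase, both sail through:

```python
# Minimal sketch of a traditional keyword filter (hypothetical rule list,
# not any real product's configuration).
SUSPICIOUS_KEYWORDS = {"urgent request", "password reset", "wire transfer"}

def keyword_filter(email_body: str) -> str:
    """Flag an email only if it contains a known suspicious phrase."""
    body = email_body.lower()
    if any(keyword in body for keyword in SUSPICIOUS_KEYWORDS):
        return "flagged"
    return "allowed"

safe_email = ("Hey Alice, I just sent the payment for the invoice. "
              "Let me know when you receive it. Thanks!")
bec_email = ("Hi Alice, we processed the payment for your latest invoice. "
             "Click here to confirm the details.")

print(keyword_filter(safe_email))  # allowed
print(keyword_filter(bec_email))   # allowed -- the BEC email slips through
```

Both emails get the same verdict because the filter only sees words, not what the sender is trying to make the recipient do.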
Most traditional security tools, including rule-based filters, signature-based detection, and even machine learning (ML) models, rely heavily on pattern recognition rather than true contextual understanding. These methods fail against attacks that change their wording, switch languages, or mimic normal business conversations.
Even though traditional ML models improve on rule-based detection, they still lack true intent understanding: they are trained to recognize patterns, not the meaning and purpose behind a message.
Generative and Predictive AI models—like Llama3, ChatGPT, and other advanced transformers—work differently. Instead of just matching patterns, they understand relationships between words, tone, and intent. This makes them ideal for detecting phishing and BEC attacks, even when the attack looks different from anything seen before.
If ChatGPT were just a pattern-matching system, it would randomly throw out responses based on common words in a prompt. Instead, it generates contextually relevant answers by understanding user intent, allowing it to provide meaningful responses even when phrased in different ways.
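As a rough illustration of what classifying intent can look like in code, here is a toy sketch using an off-the-shelf zero-shot classifier from Hugging Face. This is an assumption-laden example for explanation only—the model and candidate labels are my own choices, not how any production system works—but it shows the shift from matching keywords to scoring a message against possible intents:

```python
# Toy illustration of intent-aware classification using a public
# zero-shot model. The candidate intents are hypothetical labels
# chosen for this example.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

candidate_intents = [
    "routine payment confirmation",
    "attempt to get the recipient to click a link or take an unsafe action",
]

bec_email = ("Hi Alice, we processed the payment for your latest invoice. "
             "Click here to confirm the details.")

result = classifier(bec_email, candidate_intents)
# The highest-scoring label reflects the model's read of the sender's intent.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```

The point is not the specific model: it is that the message is judged by what it is trying to get the recipient to do, so rewording the lure does not reset the defense.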
Similarly, Inception Cyber’s Neural Analysis and Correlation Engine (NACE™) uses Generative AI to analyze emails beyond simple word patterns, allowing it to recognize the meaning, tone, and intent of a message rather than just the words it contains.
Attackers no longer need obvious tricks like suspicious links or attachments. They now craft subtle, well-written messages that bypass traditional detection. With cost-effective AI models like DeepSeek and AI toolkits like FraudGPT and GhostGPT, expect threat actors to amplify the sophistication, scale, and success rate of their attacks—this is why we created Intent-based NACE™.