What is Intent-Based Threat Detection for Email Security?

Bill Mann, Co-founder and CEO
Feb 17, 2025 10:32:56 PM

I’m often asked, “What is Intent-Based Detection?” My co-founder, Abhishek Singh, has written research papers and spoken at conferences extensively on this topic, but I wanted to break it down in a way that everyone—whether technical or not—can understand.

Beyond Keywords: Understanding the Meaning Behind an Email

Traditional email security tools often look for specific words or phrases—like “invoice,” “password reset,” or “urgent request”—to flag potential threats. But attackers (scaled with AI) have learned to work around this by changing wording, using different languages, or mimicking normal business conversations. This is where Intent-Based AI Threat Detection makes a difference.

Example: Why Keywords Alone Don’t Work

Imagine receiving two emails:


✅ Safe Email:
“Hey Alice, I just sent the payment for the invoice. Let me know when you receive it. Thanks!”


❌ Business Email Compromise (BEC) Email:
“Hi Alice, we processed the payment for your latest invoice. Click here to confirm the details.”


A traditional security filter sees the word “invoice” in both messages and treats them identically. But intent-based detection understands the meaning and purpose of the email. In the BEC example, there’s an urgent call to action (“Click here”) and an attempt to manipulate the recipient into taking an unsafe action.
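To make that concrete, here is a minimal sketch of the kind of check a keyword-based filter performs (the keyword list and function name are illustrative, not any vendor's actual rules). Both emails above contain the word “invoice,” so a keyword rule cannot tell them apart:

```python
# Minimal sketch of a keyword-based filter (illustrative only).
SUSPICIOUS_KEYWORDS = {"invoice", "password reset", "urgent request"}

def keyword_filter(email_text: str) -> bool:
    """Return True if any suspicious keyword appears in the email."""
    text = email_text.lower()
    return any(keyword in text for keyword in SUSPICIOUS_KEYWORDS)

safe_email = ("Hey Alice, I just sent the payment for the invoice. "
              "Let me know when you receive it. Thanks!")
bec_email = ("Hi Alice, we processed the payment for your latest invoice. "
             "Click here to confirm the details.")

# Both emails trigger the exact same rule: the filter cannot distinguish
# a routine payment note from a BEC lure.
print(keyword_filter(safe_email), keyword_filter(bec_email))  # True True
```

Whatever the filter does with a keyword hit, it does to both messages equally; the distinguishing signal (the manipulative call to action) is invisible to it.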

Why Traditional Email Security Approaches (Including Machine Learning) Don’t Understand Intent

Most traditional security tools, including rule-based filters, signature-based detection, and even machine learning (ML) models, rely heavily on pattern recognition rather than true contextual understanding. These methods fail against:

  • New and evolving attacks that don’t match previous examples.
  • AI-generated phishing emails that appear natural and contextually relevant.
  • Subtle deception techniques, where attackers modify sentence structures or use benign-looking messages to trick users.

Even though traditional ML models improve upon rule-based detection, they still lack true intent understanding because:

  • They rely on past examples – If an attack is new or uses unseen language, traditional ML models struggle to classify it correctly.
  • They depend on word frequency and token patterns – Instead of analyzing full context, many models still rely on statistical word correlations, making them easy to bypass with minor wording changes.
  • They are heavily tuned for specific attack types – Many security vendors use ML models fine-tuned for things like spam filtering or domain reputation scoring, but these do not understand nuanced social engineering tactics.
  • They struggle with AI-generated attacks – Generative AI can constantly modify phishing emails, meaning static ML models trained on older phishing techniques will miss newer, more sophisticated variations.
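The second bullet, reliance on word frequency and token patterns, is the easiest to demonstrate. The toy model below (with made-up token weights, purely for illustration) scores an email by summing weights of known “phishy” tokens; a minor rewording drops the score below the detection threshold:

```python
import re

# Toy token-weight model with made-up weights (illustration only).
PHISHY_TOKEN_WEIGHTS = {"click": 0.5, "confirm": 0.3, "urgent": 0.6, "verify": 0.4}
THRESHOLD = 0.7

def token_score(email_text: str) -> float:
    """Sum the weights of any known phishy tokens in the email."""
    tokens = re.findall(r"[a-z]+", email_text.lower())
    return sum(PHISHY_TOKEN_WEIGHTS.get(tok, 0.0) for tok in tokens)

original = "urgent: click to confirm your account"
reworded = "time-sensitive: follow the link to validate your account"

print(token_score(original) >= THRESHOLD)   # True  -> flagged
print(token_score(reworded) >= THRESHOLD)   # False -> slips through
```

The reworded email carries the same intent, yet none of its tokens appear in the model's vocabulary. That is the structural weakness: the features change with the wording, but the intent does not.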

How Generative AI Models Understand Intent

Generative and Predictive AI models—like Llama 3, ChatGPT, and other advanced transformers—work differently. Instead of just matching patterns, they understand relationships between words, tone, and intent. This makes them ideal for detecting phishing and BEC attacks, even when the attack looks different from anything seen before.

Example: How ChatGPT Shows the Power of Generative AI

If ChatGPT were just a pattern-matching system, it would randomly throw out responses based on common words in a prompt. Instead, it generates contextually relevant answers by understanding user intent, allowing it to provide meaningful responses even when phrased in different ways.
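One way to see the shift in framing: instead of scanning for tokens, an intent-based approach asks the model directly about the sender's purpose. The sketch below only builds such a prompt (the function name, labels, and wording are hypothetical, not NACE's actual pipeline); the model call itself is omitted:

```python
def build_intent_prompt(email_text: str) -> str:
    """Frame classification as a question about purpose, not keywords (hypothetical)."""
    return (
        "Classify the sender's intent in the email below as one of: "
        "ROUTINE_BUSINESS, CREDENTIAL_THEFT, PAYMENT_FRAUD.\n"
        "Consider tone, urgency, and the action the sender wants taken.\n\n"
        f"Email:\n{email_text}\n\nIntent:"
    )

prompt = build_intent_prompt(
    "Hi Alice, we processed the payment for your latest invoice. "
    "Click here to confirm the details."
)
# The prompt asks about purpose and pressure, so a capable model can flag
# the same message that a keyword rule waved through.
print(prompt)
```

Because the question is about intent rather than surface features, rewording the lure does not change the answer the way it changes a token score.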

Similarly, Inception Cyber’s Neural Analysis and Correlation Engine (NACE™) uses Generative AI to analyze emails beyond simple word patterns, allowing it to:

  • Identify the true purpose behind an email, regardless of wording changes.
  • Detect AI-generated attacks that traditional security tools miss.
  • Stop new phishing variations in real time—without relying on past attack patterns.

Why This Matters

Attackers no longer need obvious tricks like suspicious links or attachments. They now craft subtle, well-written messages that bypass traditional detection. With cost-effective AI models like DeepSeek and AI toolkits like FraudGPT and GhostGPT, expect threat actors to amplify the sophistication, scale, and success rate of their attacks. This is why we created Intent-based NACE™.

