Neural Analysis and Correlation Engine vs. AI Generated Attacks

Large language models have led to the emergence of AI tools that threat actors use to craft sophisticated attacks. Exploitation toolkits leveraging AI, such as WolfGPT, EscapeGPT, XXXGPT, Evil-GPT, FraudGPT, WormGPT, GhostGPT, and Dark LLMs, are proliferating at an alarming rate. Now, with the design of low-cost models like DeepSeek, malicious AI toolkits will increasingly be weaponized by threat actors to amplify the sophistication, scale, and success rate of their attacks, posing a significant and growing threat to global cybersecurity.

How will AI Models Change the Threat Landscape?

In the case of email-delivered attacks, payload exploitation can broadly be categorized into three main segments: malicious conversation payloads leading to attacks such as BEC, malicious attachments delivering malware such as ransomware, and malicious call-to-action URLs leading to phishing.

BEC, Malicious Conversation Payload

BEC scams exploit businesses or individuals engaged in routine wire transfers by compromising communication channels through social engineering or computer intrusion. These scams aim to conduct unauthorized fund transfers, with email conversations serving as the primary payload.

How will Generative AI affect BEC?

  • Generative AI models can generate countless variations of conversational payloads with the same core meaning. Fine-tuned binary and multi-class neural network classifiers such as BERT, if not comprehensively trained on all possible semantic variants, may fail to recognize novel expressions, leaving organizations vulnerable to BEC exploitation.
  • The generated variations can also omit key features that detection models rely on. For example, a sense of urgency is a common feature contributing to verdict generation in traditional models. AI-generated variations may inadvertently drop urgency cues, causing certain detection algorithms to miss the threat and leading to bypasses.
  • Language barriers no longer limit attackers, as Generative AI can craft BEC emails in multiple languages with native fluency, increasing the attack surface globally.
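To make the second point above concrete, here is a minimal, purely illustrative sketch (not any vendor's actual detector) of a keyword-based urgency heuristic, and how an AI-paraphrased request with the same intent slips past it. The cue list, function, and sample emails are all hypothetical.

```python
# Hypothetical urgency-cue heuristic for BEC detection (illustration only).
URGENCY_CUES = {"urgent", "immediately", "asap"}

def naive_bec_flag(email_text: str) -> bool:
    """Flag an email as suspicious if it contains an urgency cue."""
    words = email_text.lower().replace(",", " ").replace(".", " ").split()
    return any(cue in words for cue in URGENCY_CUES)

original = "Urgent: please wire the funds immediately to the new account."
# Same request, rephrased by a generative model to drop the urgency cue.
paraphrase = "When you have a moment today, could you route the payment to the updated account?"

print(naive_bec_flag(original))    # True  (urgency cue present)
print(naive_bec_flag(paraphrase))  # False (same intent, no cue: bypass)
```

The paraphrase carries the identical fraudulent intent, yet the single-feature rule returns a clean verdict, which is exactly the failure mode described above.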

Ransomware, Malicious Attachments

Malicious attachments or malicious code are used to deliver malware such as ransomware, banking trojans, RATs, downloaders, and droppers, which in turn can deliver families including SocGholish, Cobalt Strike, IcedID, BumbleBee, and Truebot. Attachments can also lead to phishing attempts designed to steal sensitive personal, financial, and/or login credentials; phishing is discussed in the next section.

How Will Generative AI Affect Malware?

  • Crafting Malicious Payload: Generative AI can generate variants of malicious code at scale. Algorithms that rely on static scanning or file-based features, such as decision tree models, are susceptible to evasion if they do not account for all possible variations. For instance, file size is often a key feature used by detection algorithms to determine malicious intent. Attackers can introduce dead code to artificially inflate the file size, effectively bypassing such detection mechanisms.
  • Evasions: Detection technologies such as Sandboxes, AV (Signatures, Heuristics-based, and ML/AI applied to malicious files) rely on identifying malicious payloads and features. However, these can be concealed through evasive techniques during scanning, leading to detection bypass. Generative AI enables the automated addition of both known and novel evasions at scale, allowing malware to obscure its malicious behavior/payload during analysis, further increasing its ability to evade detection.
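The file-size example in the first bullet can be sketched in a few lines. This is a deliberately simplified, hypothetical single-feature rule (real detectors combine many features), showing how dead-code padding defeats a size threshold.

```python
# Hypothetical single-feature rule: very small executables look like droppers.
SMALL_DROPPER_THRESHOLD = 50_000  # bytes; illustrative cut-off, not a real value

def size_based_verdict(payload: bytes) -> str:
    return "suspicious" if len(payload) < SMALL_DROPPER_THRESHOLD else "benign"

stub   = b"\x90" * 4_096             # stand-in for a tiny malicious stub
padded = stub + b"\x00" * 100_000    # same stub plus inert dead-code padding

print(size_based_verdict(stub))    # suspicious
print(size_based_verdict(padded))  # benign: the size feature was evaded
```

The padded sample behaves identically at runtime, but any model leaning on file size as a feature now scores it as benign.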

Phishing, Malicious Call-to-Action URL

As per the FBI's Internet Crime Complaint Center (IC3) report, phishing is defined as the use of unsolicited email, text messages, and telephone calls purportedly from a legitimate company requesting personal, financial, and/or login credentials. Malicious call-to-action URLs inside an email lead to phishing or delivery of malware. In this section we discuss call-to-action URLs leading to phishing.

How will Generative AI Affect Phishing?

  • Crafting Phishing Pages: Generative AI can be leveraged to generate diverse variants of phishing pages impersonating a brand at scale. Detection algorithms that rely on static analysis or features extracted from landing URLs, such as decision tree models, are prone to evasion if they do not account for all possible variations with different features. For instance, phishing pages typically contain fewer hyperlinks than legitimate websites, a feature often used in detection models. However, AI-generated phishing pages can dynamically adjust such attributes to evade detection.
  • Adding Evasions to Phishing URLs: Traditional detection technologies (Sandbox, Signatures, ML/AI applied to phishing pages) rely on analyzing the final landing phishing page to determine a verdict. However, phishing URLs employ evasive techniques such as redirects, QR codes, cloud hosting, CAPTCHAs, links hidden behind CDNs to obscure the final landing page at the time of inspection, leading to detection bypass. Generative AI can automate and scale these evasions, making it increasingly difficult for detection technologies to reach the final phishing page, further enhancing evasion capabilities.
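The hyperlink-count feature mentioned above can be illustrated with a toy sketch. The threshold, pages, and regex-based link counter are hypothetical simplifications of the kind of static feature a tree model might consume.

```python
# Illustrative hyperlink-count feature for phishing-page detection.
import re

LINK_THRESHOLD = 5  # hypothetical: pages with fewer links look phishing-like

def link_count(html: str) -> int:
    return len(re.findall(r"<a\s+href=", html, flags=re.IGNORECASE))

def looks_like_phishing(html: str) -> bool:
    return link_count(html) < LINK_THRESHOLD

sparse_phish = '<form>...</form><a href="/login">Sign in</a>'
# AI-generated variant: the same credential form, padded with decoy links.
padded_phish = sparse_phish + "".join(
    f'<a href="/news/{i}">Story {i}</a>' for i in range(10)
)

print(looks_like_phishing(sparse_phish))  # True
print(looks_like_phishing(padded_phish))  # False: the feature was evaded
```

Both pages harvest the same credentials; only the padded variant clears the link-count feature.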

How Does Neural Analysis and Correlation Engine (NACE) Prevent AI-Generated Attacks?

Malicious Attachments and URLs:

NACE takes a first-principles approach to detecting malicious attachments and URLs that may be generated by AI. It does not rely on the malicious payload or the final phishing page for decision making. This non-reliance on the payload also makes it immune to AI-generated evasions, which are added precisely to hide that payload.

NACE uses the semantic and thematic meaning of an email to derive its intent. The contextual relationship between that intent, the SMTP headers, and auxiliary information from URLs and files enables it to determine whether a URL or attachment is malicious or benign.
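A minimal sketch of this correlation idea follows. The features, weights, and threshold are entirely hypothetical and stand in for NACE's internals; the point is only that a verdict can be reached by correlating intent with header and auxiliary URL signals, without inspecting the payload itself.

```python
# Hypothetical intent/header/URL correlation (illustration, not NACE internals).
def correlate(intent: str, headers: dict, url_features: dict) -> str:
    score = 0
    if intent in {"credential_request", "payment_request"}:
        score += 2  # risky intent derived from the email body
    if headers.get("from_domain") != headers.get("reply_to_domain"):
        score += 2  # SMTP header mismatch
    if url_features.get("newly_registered_domain"):
        score += 1  # auxiliary URL signal
    return "malicious" if score >= 3 else "benign"

verdict = correlate(
    intent="credential_request",
    headers={"from_domain": "corp.example", "reply_to_domain": "mail.evil.test"},
    url_features={"newly_registered_domain": True},
)
print(verdict)  # malicious
```

Note that no byte of the attachment or phishing page is examined: an attacker who mutates the payload with generative AI leaves these contextual signals unchanged.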

BEC and Conversation Payload: 

To detect BEC attacks, regardless of whether the email body has been authored by a threat actor or AI, NACE utilizes multiple deep learning models to identify intent, topics, tone, sentiment, tactics, call-to-action, and other relevant factors within the body and attachments of the email. This is achieved through a combination of techniques, including zero-shot classification using LLMs and fine-tuned models trained on variations of semantics, to enhance classification and natural language understanding. Additional features are extracted from the SMTP headers, creating a rich header-based feature set. The contextual relationship between intent and SMTP headers allows for an accurate final verdict on BEC detection.

Deriving the Intent From an Email 

NACE leverages pre-trained and fine-tuned multi-class classifiers, trained on a well-labeled dataset that includes AI-generated semantic variations. However, a multi-class classifier may be vulnerable to evasion if it has not been exposed to a diverse range of semantic variants. To mitigate this, NACE employs zero-shot semantic classification through prompt engineering. By harnessing the power of an LLM, NACE identifies semantic variations via zero-shot classification, ensuring that these variants, which serve as features, are effectively detected.
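As a hedged sketch of zero-shot classification via prompt engineering: the template below shows the general shape of such a prompt. The label set is invented, and the LLM call is mocked with a stub, since the actual prompts and models are not described in detail here.

```python
# Illustrative zero-shot intent classification prompt (hypothetical labels).
INTENT_LABELS = ["payroll_redirect", "invoice_fraud", "gift_card_scam", "benign"]

def build_zero_shot_prompt(email_text: str) -> str:
    labels = ", ".join(INTENT_LABELS)
    return (
        "Classify the intent of the email below into exactly one of: "
        f"{labels}.\n\nEmail:\n{email_text}\n\nIntent:"
    )

def mock_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; keyed on a phrase for demonstration only.
    return "payroll_redirect" if "direct deposit" in prompt else "benign"

prompt = build_zero_shot_prompt("Please update my direct deposit details before Friday.")
print(mock_llm(prompt))  # payroll_redirect
```

Because the labels live in the prompt rather than in trained weights, the classifier can be pointed at new intent categories without retraining, which is what makes the zero-shot layer robust to unseen semantic variants.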

Subsystem for Semantic and Thematic Analysis within the NACE Framework

In addition to semantic analysis, NACE uses hierarchical topic and phrase modeling as part of its feature set for decision-making. This approach provides thematic insights by generating a structured representation of topics, phrases, and subtopics, ensuring that even when semantic variations occur, the underlying themes are consistently identified. The design also makes use of similarity analysis to extract semantics from an email: semantics used by threat actors to deliver malicious attachments and URLs are stored as embeddings, and for each incoming email, text is extracted, embeddings are computed, and cosine similarity is measured to identify matching semantics. This multi-layer approach ensures that the intent of an email is accurately identified, whether it is generated by a threat actor or by AI.
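The similarity layer can be sketched as follows. A toy bag-of-words "embedding" stands in for whatever sentence encoder is actually used; the stored lure and threshold are hypothetical.

```python
# Toy cosine-similarity matching against stored delivery-lure "embeddings".
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a real sentence embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Embeddings of semantics previously used to deliver malicious attachments/URLs.
known_lures = [embed("please review the attached invoice and confirm payment")]

incoming = embed("kindly review the attached invoice and confirm the payment")
best = max(cosine(incoming, lure) for lure in known_lures)
print(best > 0.7)  # True: the rephrased email still matches a known lure
```

Even though the incoming text is a paraphrase rather than an exact copy, its vector lands close to the stored lure, so the match survives the kind of surface-level rewording that generative AI produces.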

Conclusion 

Traditional detection methods (signature-based, sandboxing, and machine learning) rely on analyzing malicious payloads or phishing URLs. Generative AI, however, evolves these threats rapidly, creating evasive variants at scale that outpace conventional defenses. NACE shifts the focus from malicious payloads to intent, leveraging contextual relationships between intent, SMTP headers, and auxiliary file/URL data to determine whether URLs or attachments are malicious. This ensures robust detection even against AI-generated threats. For BEC attacks, generative AI produces highly varied conversational payloads that evade feature-based detection. NACE counters this with zero-shot LLM classification and fine-tuned models that extract intent, tone, and tactics. Its multi-layered semantic analysis, combined with SMTP header insights, enables precise identification of BEC messages, whether crafted by a human or by AI.