GenAI vs. Security: The Industrialization of Phishing Attacks (Part 2)

Welcome to Phishing-as-a-Service. If Blog 1 was the diagnosis, this is the autopsy.
The handcrafted, artisan feel of phishing is largely gone, replaced by something industrial. Generative AI hasn't just amplified volume; it has made it possible for almost anyone to run professional-grade phishing campaigns at scale.
From Craft to Code: The Rise of Automated Deception
Generative AI is giving attackers something they’ve never had before: scale and polish.
In 2025, we’re seeing:
- 1,265% increase in phishing volume tied to AI tooling (TechRadar)
- Deepfake audio and video used in CEO fraud (Business Insider)
- Attackers using Lovable AI to spin up phishing sites in minutes (SC Media)
- Quishing campaigns that weaponize QR codes (Arxiv Research)
This time around, attackers aren’t just cloning your company’s logo. They’re impersonating your executives, mimicking their tone, and even fabricating calendar invites that look like they came straight from your internal systems.
Attackers No Longer Need Access: Influence Is the Game Now
This marks a fundamental shift: threat actors don’t have to break into your systems to cause damage. Instead, they just have to persuade a decision-maker to click, reply, or approve a transfer.
And with AI, that influence is precise and adaptive.
- Emails are customized based on scraped social data
- Urgency is context-aware ("Following up on yesterday's board notes...")
- Tone is mimicked using NLP models
In other words, attackers aren’t just replicating your message; they’re investing time and effort in reverse-engineering your executive's communication style, down to cadence, tone, and context.
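To make that concrete, here's a toy sketch of how little it takes to fingerprint a writing style from a scraped sample. Everything here (the features, the sample text) is invented for illustration; real tone-mimicry pipelines feed far richer signals into language models, but even these basics capture a sender's "cadence":

```python
import re
from statistics import mean, pstdev

def style_fingerprint(text: str) -> dict:
    """Extract a few crude stylometric features from a writing sample.
    Illustrative only: real pipelines use far richer features."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[A-Za-z']+", text)
    sent_lens = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "avg_sentence_len": mean(sent_lens) if sent_lens else 0.0,
        "sentence_len_stdev": pstdev(sent_lens) if len(sent_lens) > 1 else 0.0,
        "exclamation_rate": text.count("!") / max(len(sentences), 1),
        "avg_word_len": mean(len(w) for w in words) if words else 0.0,
    }

# A short (made-up) executive writing sample
sample = "Thanks team! Quick turnaround needed. Can you approve by EOD?"
print(style_fingerprint(sample))
```

A handful of numbers like these, scraped from public posts and leaked emails, is enough to prompt an LLM to "write like this person."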
The Technical Weaponization of Machine Learning
What makes this moment unique is how attackers are now co-opting the very machine learning models defenders built. Research from firms like HiddenLayer has highlighted how threat actors manipulate ML pipelines to generate synthetic but convincing phishing emails or even evade AI-powered spam filters. HiddenLayer’s work on adversarial machine learning shows how attackers can subtly alter an email’s structure to bypass detection while still reading perfectly to a human recipient.
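To see the principle at work, here's a deliberately naive sketch: a substring-based filter, and an "evasion" that inserts zero-width characters so the same text slips past while rendering identically to a human. The blocklist and filter are invented for this demo, and production filters are far more sophisticated, but adversarial attacks exploit the same gap between what the machine matches and what the human sees:

```python
# Toy phishing filter: flags emails containing blocklisted phrases.
# (Invented for illustration; real filters are far more robust.)
BLOCKLIST = {"password reset", "verify your account"}

def naive_filter(email_body: str) -> bool:
    """Return True if the email is flagged as phishing."""
    lowered = email_body.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

def evade(text: str) -> str:
    """Insert zero-width spaces (U+200B) inside flagged phrases so
    substring matching fails, while the text renders unchanged."""
    out = text
    for phrase in BLOCKLIST:
        idx = out.lower().find(phrase)
        if idx != -1:
            original = out[idx : idx + len(phrase)]
            out = out.replace(original, "\u200b".join(original))
    return out

msg = "Urgent: please complete your password reset today."
assert naive_filter(msg)             # flagged as written
assert not naive_filter(evade(msg))  # slips past after evasion
```

Adversarial perturbations against ML classifiers are subtler than zero-width characters, but the objective is identical: change the input just enough to flip the model's verdict without changing what the recipient perceives.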
Meanwhile, companies like Abnormal AI and Microsoft Threat Intelligence have documented attackers using LLMs to generate phishing lures and also to dynamically adapt them based on real-time responses from victims. The result is phishing campaigns that operate more like chatbots, holding conversations, adjusting tone, and persisting until the attacker gets what they want.
One 2024 study from the University of Cambridge demonstrated that LLM-crafted phishing emails achieved a 30% higher click-through rate than human-written phishing attempts—a sobering sign of how effective machine-driven deception can be.
AI vs. AI: The Only Way Forward
Traditional defenses are under strain. Static filters often miss contextual cues, and signature-based detection struggles to keep pace with polymorphic campaigns. As a result, defenders are shifting toward adaptive AI systems that assess communication intent and behavior rather than relying solely on known indicators.
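What "assessing behavior rather than known indicators" can mean in practice is easier to see with a minimal sketch. Here a sender's historical message lengths form a baseline, and a simple z-score flags sharp deviations; the history, threshold, and choice of feature are made up for illustration, and real systems model many signals (send times, recipient sets, reply graphs), not one:

```python
from statistics import mean, pstdev

class SenderBaseline:
    """Baseline one numeric behavior signal for a sender and flag
    outliers via z-score. Illustrative sketch, not a product design."""

    def __init__(self, history: list[int]):
        self.mu = mean(history)
        self.sigma = pstdev(history) or 1.0  # avoid divide-by-zero

    def is_anomalous(self, value: int, threshold: float = 3.0) -> bool:
        return abs(value - self.mu) / self.sigma > threshold

# Hypothetical word counts from an executive's past emails
baseline = SenderBaseline([120, 95, 110, 130, 105, 115])
print(baseline.is_anomalous(112))  # within normal range -> False
print(baseline.is_anomalous(900))  # wildly off-baseline -> True
```

The point isn't the statistic; it's the shift from "does this match a known-bad signature?" to "does this look like how this person normally communicates?"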
For example, research and tools are emerging that baseline normal communication patterns within organizations, making anomalies easier to flag. Others are focused on strengthening machine learning models against adversarial manipulation, while some academic efforts are exploring whether large language models can be used to detect phishing attempts that appear ‘too polished’ to be human.
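One toy version of the "too polished" idea is burstiness: the variability of sentence lengths, which tends to be higher in human writing than in very uniform machine-generated text. This is a weak heuristic rather than a detector, and the sample texts below are invented, but it shows the flavor of the signal researchers are probing:

```python
import re
from statistics import mean, pstdev

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths. Uniformly even
    sentences (low burstiness) are one weak signal of machine
    generation. Illustrative heuristic only."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lens = [len(s.split()) for s in sentences]
    if len(lens) < 2 or mean(lens) == 0:
        return 0.0
    return pstdev(lens) / mean(lens)

human = ("Got it. I'll take a look tomorrow, though honestly this "
         "week has been brutal. Thanks!")
uniform = ("Please review the attached document. Confirm the details "
           "are correct. Reply at your earliest convenience.")
print(burstiness(human) > burstiness(uniform))  # True in this toy case
```

No single stylometric feature survives a motivated attacker, which is why the academic work leans on ensembles of signals and on LLMs judging LLM output.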
Collectively, these approaches reflect the industry’s recognition that phishing is now an AI-enabled threat that must be countered with equally adaptive techniques.
Teaser: In our final post, we’ll talk about how modern defense needs two legs: AI-powered detection and human-focused training (like DEG's Email Phishing Exercise). One without the other just isn’t enough anymore.