AI-Powered Email Attacks Have Increased Significantly in 2025, Prompting AegisAI to Unveil Vanguard
In 2025, artificial intelligence-generated email attacks surged fivefold, according to new research from AegisAI. That increase has prompted the launch of Project Vanguard, a defense developed specifically to counter evasive threats hidden behind adversarial CAPTCHAs and to strengthen email security in a rapidly evolving landscape.
The Escalating Threat of AI-Generated Email Attacks
In the new report, titled State of the AI Threat in Email, AegisAI analyzed over 20,000 phishing, scam, and malware emails. The findings highlighted that AI-driven phishing attempts skyrocketed by 500%, making up 13.9% of all phishing incidents observed last year. What's even more concerning is the effectiveness of these AI-generated attacks: they successfully bypass traditional email filters at nearly double the rate of human-created emails, infiltrating users' inboxes over 50% of the time.
The report revealed a staggering 72.6% of successful AI attacks evaded email authentication controls, as attackers often exploit compromised accounts with verified settings. This alarming trend underscores an urgent need for advanced security measures to protect users from increasingly sophisticated phishing tactics.
Introducing Project Vanguard
AegisAI's response to these escalating threats is the introduction of Vanguard, a groundbreaking proactive defense aimed at tackling evasive threats hidden behind CAPTCHAs, cloaked webpages, and malicious documents that often escape traditional security measures. Vanguard is backed by a team with extensive experience building Google's reCAPTCHA, ensuring a formidable approach to counteract this wave of AI-driven exploitation.
How Vanguard Works
Vanguard extends AegisAI's existing agents that monitor inboxes by following suspicious links or attachments outward into the open web. When an inbox agent flags a dubious element, Vanguard seamlessly investigates the web environment, tracing the potential malicious path to wherever it leads—be it a phishing site or a compromised document—and promptly generates a detailed threat report within minutes.
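The report does not describe Vanguard's internals, but the hop-by-hop link tracing described above can be illustrated with a minimal sketch. All names here (`trace_link`, `ThreatReport`, the stubbed page graph) are hypothetical, not AegisAI's API; a real tracer would fetch pages in a sandboxed browser rather than consult an in-memory map.

```python
from dataclasses import dataclass

@dataclass
class ThreatReport:
    """Summary of where a flagged link led and what was found there."""
    url: str
    hops: list       # (url, page_kind) pairs visited along the way
    verdict: str

# Stubbed "web" mapping each URL to (redirect_target, page_kind) so the
# sketch runs offline. In practice each lookup would be a live fetch.
FAKE_WEB = {
    "http://short.example/x1": ("http://captcha.example/gate", "redirect"),
    "http://captcha.example/gate": ("http://phish.example/login", "captcha"),
    "http://phish.example/login": (None, "credential_form"),
}

def trace_link(url: str, max_hops: int = 10) -> ThreatReport:
    """Follow a flagged link hop by hop and classify the final destination."""
    hops = []
    current = url
    for _ in range(max_hops):
        target, kind = FAKE_WEB.get(current, (None, "unknown"))
        hops.append((current, kind))
        if kind == "credential_form":
            # Destination asks for credentials: treat as phishing.
            return ThreatReport(url, hops, "phishing")
        if target is None:
            break
        current = target
    return ThreatReport(url, hops, "inconclusive")

report = trace_link("http://short.example/x1")
print(report.verdict)  # phishing
```

The key design point the article highlights is that the tracer keeps following redirects and CAPTCHA gates outward until it reaches a terminal page, rather than judging the email's surface content alone.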
This strategic shift significantly enhances the company's defense capabilities, allowing it to neutralize threats even before they reach users’ devices—an essential evolution in the ongoing battle against automated deception.
Insights from AegisAI Leadership
Cy Khormaee, Co-Founder and CEO of AegisAI, states, “The only effective response to AI-powered attacks is an AI-powered defense. Given how traditional filters are essentially a coin flip against modern LLM-generated threats, Project Vanguard represents a critical advancement in safeguarding user experiences.”
Ryan Luo, Co-Founder and CTO, echoes this sentiment, noting that Vanguard lets organizations act against sophisticated bot strategies that traditional, filter-only approaches have failed to thwart.
Early Access and Future Plans
As part of their commitment to staying ahead of cyber threats, AegisAI will begin early customer testing for Project Vanguard later this year, with live demonstrations scheduled for the upcoming RSA Conference 2026. Organizations eager to participate in the early access program can apply through the AegisAI website.
The full State of the AI Threat in Email report is accessible through AegisAI's platform, giving readers the context they need to combat the rising tide of AI-driven email scams.
Conclusion
With the email security landscape changing rapidly, AegisAI's Vanguard demonstrates a proactive, innovative approach to defending against increasingly sophisticated AI-generated phishing attacks. As attackers' use of AI advances, so must the defenses, and pairing advanced AI capabilities with protective measures is now more crucial than ever.