New Research Highlights Human Element Vulnerabilities Amid AI Adoption in Organizations
The State of Human Risk 2025: Navigating Cybersecurity Challenges in an AI-Dominated Landscape
In the ever-evolving world of cybersecurity, a recent report from KnowBe4, a leader in human and agentic AI risk management, sheds light on the daunting challenges organizations face as they integrate artificial intelligence (AI) into their operations. The comprehensive study surveyed 700 cybersecurity leaders and 3,500 employees and revealed that a staggering 96% of organizations struggle to adequately secure the human element amid the rapid technological shift.
Growing Pressure on Cybersecurity Leaders
As businesses increasingly adopt AI technologies, cybersecurity leaders are under mounting pressure to manage behavioral risks associated with the human workforce. The report highlights a troubling 90% surge in security incidents linked to the human element over the past year. This alarming increase is primarily attributed to social engineering attacks, such as phishing and Business Email Compromise (BEC), alongside risky behaviors and human errors.
The survey findings indicate that 93% of cybersecurity leaders reported incidents caused by cybercriminals exploiting employees' vulnerabilities. Furthermore, a notable 57% increase in email-related incidents underscores that email remains a primary battlefield for malicious activity; 64% of organizations fell victim to external attacks that used email to exploit employees.
Insufficient Measures Against Human Error
Human errors continue to pose significant risks, as evidenced by the fact that 90% of organizations reported incidents stemming from employee mistakes. Even more concerning, malicious insiders were responsible for incidents in 36% of organizations surveyed. The report clearly outlines the urgent need for increased budget allocations to strengthen defenses against potential breaches, with 97% of cybersecurity leaders advocating for more resources to protect the human element.
The Dual Challenge of AI Integration
The past year has marked a pivotal shift in how AI applications are viewed within organizations, serving as both a facilitator of productivity and a source of risk. Security incidents tied to AI applications showed a 43% increase, representing the second-highest growth across all attack channels. Despite efforts to address AI-related risks—with an impressive 98% of organizations reportedly taking steps to mitigate these threats—cybersecurity leaders still classify AI-powered threats as their most pressing security concern.
The report also highlights a worrying trend: 45% of leaders cite the ever-evolving nature of AI threats as their greatest challenge in managing behavioral risk, while 32% of organizations noted an uptick in incidents involving deepfakes, an emerging and increasingly sophisticated threat. In addition, employee satisfaction with current measures to address AI cybersecurity risks remains low, with 56% expressing dissatisfaction with their company's approach. This discontent may push employees toward unsanctioned platforms, heightening the risks associated with 'shadow AI'.
Email Remains the Most Vulnerable Channel
Looking ahead, the research suggests that email is poised to remain the most vulnerable channel in the coming years. The landscape also points to a rise in multi-channel attacks through messaging applications and voice phishing (vishing), alongside cybercriminals using AI tools to execute sophisticated attacks at unprecedented scale. Organizations must adapt swiftly to these evolving threats or risk leaving themselves exposed.
As highlighted by Javvad Malik, KnowBe4's lead CISO advisor, "The productivity gains from AI are too great to ignore, so the future of work requires seamless collaboration between humans and AI. Employees and AI agents will need to work in harmony, supported by a security program that proactively manages the risk of both." He emphasizes the critical need for human risk management frameworks to evolve, ensuring they encompass the AI component before vital business activities transition to high-risk, unmonitored platforms.
Conclusion
As we step into a future increasingly characterized by AI-powered workforces, organizations must prioritize the security of the human element. The findings from KnowBe4's report serve as a call to action for comprehensive reform in how businesses approach cybersecurity. For more in-depth insights and recommendations, access 'The State of Human Risk 2025: The New Paradigm of Securing People in the AI Era'.