AI Adoption Outpaces Security Measures, New Kroll Research Reveals
A recent report by Kroll, a leading financial and risk advisory firm, brings to light alarming realities about artificial intelligence (AI) implementation in corporate environments. The report underlines that the rapid growth of AI applications has outstripped the security measures and governance frameworks essential for safe operation.
Key Findings from Kroll's Research
The research, which surveyed 1,000 cybersecurity decision-makers across ten countries, reveals that 76% of organizations encountered security incidents involving AI applications or models in the past two years. This statistic points to a critical vulnerability in corporate cyber defenses: nearly one-third of organizations report costs exceeding $1 million tied to these incidents.
Such findings suggest that despite an eagerness to harness AI's potential for enhancing operational efficiency and threat response, many companies lack the fundamental security practices required to do so responsibly. Remarkably, 90% of respondents recognized barriers to greater investment in AI security, primarily stemming from the perceived lack of clear ROI and insufficient executive comprehension of AI risks.
The Gap Between AI Innovation and Security
Currently, organizations typically allocate only about 13% of their AI budget to testing security controls or the models themselves. This underinvestment highlights a glaring disconnect between AI adoption and spending on the necessary security infrastructure. Organizations with higher levels of cyber maturity—those with established security practices—are six times more likely to dedicate over 20% of their AI budget to security testing. This contrasts starkly with companies of low cyber maturity, nearly half of which reported negligible governance over the AI tools and services they adopt.
Dave Burg, Global Group Head of Cyber and Data Resilience at Kroll, stated, "Organizations must balance their eagerness to employ AI for operational advancement without neglecting the foundations of prevention, detection, and response to security incidents. The essence of the challenge lies in the fact that AI can magnify existing vulnerabilities if the right security measures are not in place."
Navigating the Dangers of AI Mismanagement
The rapid digital transformation sparked by AI adoption has fundamentally altered the corporate threat landscape. While the agentic AI ecosystem offers tremendous potential for innovation, it also enlarges the attack surface for cyber threats. As stated by Quiessence Philips, Head of Security Architecture and Engineering at Kroll, without simultaneous investment in security infrastructure, AI deployment may lead to profound consequences for organizations.
Organizations with robust security frameworks experience significantly fewer AI-related incidents: 89% of those with low cyber maturity reported incidents, compared with only 54% of those with high maturity. Notably, 46% of highly mature organizations have faced no AI-related incidents in the last two years. This direct correlation between a strong security foundation and resilience against AI-related threats underscores the necessity for companies to develop such frameworks.
The Road Ahead
While integrating AI into business practices is non-negotiable in today's complex security environment, Kroll's research advocates a dual approach: adopting AI technologies while robustly securing the frameworks that underpin them. As businesses move forward, embracing AI with an eye on security fundamentals will be essential to harnessing its potential without compromising safety. Exploring collaborative avenues, such as CrowdStrike's Charlotte AI AgentWorks Ecosystem, offers businesses a path to operationalizing AI effectively within their security infrastructure.
Kroll's full report is available on their website for those interested in further insights into the complex relationship between AI adoption and security measures.
Stay tuned for a related webinar where these findings will be discussed in more depth.