Bridging the AI Policy Awareness Gap in the Workplace
A recent survey by KnowBe4, a leading provider of cybersecurity platforms specializing in integrated human risk management, reveals an alarming disconnect between employees' use of AI tools and their awareness of related corporate policies. The study, which polled employees in Germany, South Africa, the Netherlands, France, the United Kingdom, and the United States, found that while a significant number of workers use AI tools, only a small fraction are aware of their organization's policies governing that use.
Key Findings of the Survey
The statistics are telling: on average, 60.2% of employees reported using AI tools in their workplaces, yet only 18.5% were aware of their company’s AI usage policies. This stark gap indicates that much of the AI use within organizations occurs without clear guidelines or oversight. Furthermore, approximately 10% of employees admitted to entering customer data into AI tools during their work, raising serious concerns about data privacy and security.
Regional Differences in AI Utilization
The survey identified notable variations in AI usage across the regions surveyed. While the global average stood at 60.2%, France reported the lowest rate, with only 54.2% of employees using AI tools, suggesting a slower pace of AI integration in French workplaces. South Africa led with a usage rate of 70.1%, reflecting broader acceptance and adoption of AI technology in the region.
Ongoing Challenges in Policy Awareness
Another concerning finding is that, on average, 14.4% of employees said they were not familiar with their company’s AI policy. The issue is particularly pronounced in the Netherlands, where 16.1% were unaware of the policy, and in the UK, where the figure is 15.8%. Organizations clearly need to strengthen their policy communications and training frameworks to ensure that employees are adequately informed.
Lagging Adoption of Approved AI Tools
Among employees using AI, only 17% reported that their usage was overseen by IT departments or security teams. South Africa had the highest figure at 23.6%, yet overall the numbers remain low. This underlines a pressing need for organizations to actively provide and promote approved AI solutions, ensuring safer and more compliant usage.
The findings of this survey stress the urgency for organizations to bridge the gap between AI utilization and policy awareness. Drafting policies is not enough; companies must actively communicate and promote them within their teams. Comprehensive training on ethical and safe AI usage is crucial, alongside providing employees with easily accessible, approved AI tools. By taking these proactive steps, organizations can significantly mitigate the serious risks associated with unchecked AI usage.
Insights from Roger Grimes
Roger Grimes, a data-driven defense evangelist at KnowBe4, commented on the issue stating, "The gap in AI governance is like a ticking time bomb for organizations. The fact that the majority of employees are using AI while less than 20% understand the rules governing its use is a very serious issue. While AI tools are incredibly powerful, without clear policies and training, there is a risk that employees may inadvertently input sensitive information like customer data into unsecured systems. Cyber risks are often viewed in terms of external threats, but in the age of AI, internal misuse, even when unintentional, can lead to severe breaches of data, compliance violations, and damage to an organization’s reputation."
Survey Overview
This study was conducted by Censuswide, targeting 12,037 employees across six countries (Germany, South Africa, the Netherlands, France, the United Kingdom, and the United States) who use computers as part of their jobs. Data collection took place from July 17 to July 25, 2024. Censuswide ensures adherence to the Market Research Society’s codes of conduct and ESOMAR principles. The firm is also a member of the British Polling Council.
For more detailed information and best practices regarding security measures, please visit the official KnowBe4 website. KnowBe4 Japan recently joined the AI Governance Association (AIGA), further contributing to discussions around safe and flexible AI usage and underscoring the importance of understanding and promoting AI governance. For more details, please refer to the KnowBe4 Japan press release.
About KnowBe4
KnowBe4 is trusted by over 70,000 customers worldwide. It helps employees make smarter security decisions every day, strengthens security culture, and effectively manages human risk. With its AI-driven "Best of Suite" platform for human risk management, KnowBe4 addresses human behavior to create a robust layer of defense that can adapt to emerging cyber threats. Through its HRM+ platform, the company provides security awareness training, compliance education, cloud email security, real-time coaching, cloud-based anti-phishing, and AI defense agents. As the only global security platform vendor focused on human risk, KnowBe4 transforms employees from an organization’s biggest attack target into its strongest layer of defense.