OpenText Survey Reveals Alarming Trends in AI Adoption and Security
Recent findings from a survey conducted by OpenText, in collaboration with the Ponemon Institute, reveal a troubling gap in how organizations worldwide are integrating Generative AI. The survey, titled "Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI," found that more than half of companies (52%) have fully or partially implemented Generative AI. At the same time, respondents reported significant delays in establishing the security and governance frameworks needed to manage these technologies.
This discrepancy paints a stark picture of a growing crisis within industries, highlighting that while adoption of Generative AI is swift, the attention to requisite governance and security measures is alarmingly lacking. Muhi Majzoub, OpenText's Executive Vice President of Product and Engineering, emphasized that adopting AI is not merely about deploying the tools but understanding the responsibilities that come with their use. He stated, “Security and governance are foundational to deriving true value from AI. By embedding these aspects from the start, organizations can operate with greater transparency and continuously monitor the results produced by AI systems.”
The State of AI Maturity in Organizations
The survey offers a concerning view of AI maturity, revealing that only 20% of organizations reported reaching a stage of comprehensive AI integration paired with effective assessment of security risks. Furthermore, less than half (43%) have adopted a risk-based governance strategy for their AI systems. As AI systems grow more autonomous and become integral to critical business operations, bridging this maturity gap is essential for ensuring reliability, compliance, and long-term business value.
Key findings from the survey indicate that nearly 79% of organizations have yet to reach a level of AI maturity that supports effective cybersecurity activities and risk assessments. Moreover, only 41% of respondents have established AI-specific data privacy policies, while 62% find it difficult to minimize risks associated with model bias, especially those relating to ethical and responsible AI principles.
In terms of managing biases, security threats, and ethical implications of AI-related risks, fewer than half (43%) of the respondents have adopted a risk-based approach to AI governance. Additionally, 58% find it very difficult or extremely difficult to mitigate risks related to prompts and inputs that may lead to misleading, inaccurate, or harmful responses, highlighting significant gaps in user risk management strategies.
In addition, 59% of respondents expressed concern that AI implementation will make compliance with privacy and security regulations more challenging, yet only 41% of organizations have established data privacy policies tailored to AI.
Trustworthiness and Reliability in AI
As organizations increasingly utilize AI to enhance operational efficiencies—security operations included—issues surrounding trust, reliability, and explainability remain pertinent. This raises concerns that tools designed to enhance security may fall short in effectiveness and autonomy due to the lack of established governance and maturity frameworks. Despite recognizing AI's potential in swiftly detailing anomalies and new threats, only 51% of respondents agree that it is effective in reducing the time required to detect such threats.
Participants also reported operational reliability challenges, with 45% indicating that errors in AI decision-making rules significantly hinder effectiveness, and 40% citing inaccuracies in AI input data. Fully autonomous AI remains a distant goal: less than half (47%) believe their AI models could learn robust norms on their own and make safe decisions autonomously. Even as AI models gain autonomy, the trust placed in them remains modest, with over half (51%) maintaining that human oversight is still necessary given how rapidly attackers adapt.
Muhi Majzoub stated, “Leading the next phase of AI adoption will be companies that can incorporate transparency and governance functions into AI frameworks. As AI permeates daily business operations, organizations must maintain robust information management foundations along with clear governance frameworks, policy-based management, and ongoing oversight capabilities. Equally critical is ensuring that innovation is responsibly scaled while generating measurable business value from the outset by carefully aligning AI with appropriate data, security measures, and adequate supervisory structures.”
For more detailed insights, see OpenText's additional report on the implications of Generative AI in business operations.