Netskope Research Highlights Growing Threat of Shadow AI in Enterprises
The Surge of GenAI Platforms in the Enterprise
In a new report from Netskope, a leader in modern security and networking, the growth of generative AI (GenAI) platforms is shown to be accelerating sharply. The research found a 50% increase in usage among enterprise end-users within just three months, highlighting a shift in how companies leverage AI technologies. This surge is not only about innovation; it also carries significant security implications, especially concerning shadow AI applications—AI tools that employees use without official approval.
The report indicates that a significant portion of all current app adoptions is attributed to shadow AI, raising concerns among IT departments and security teams. As businesses endeavor to safely enable Software-as-a-Service (SaaS) GenAI applications, the rise of shadow AI compounds potential vulnerabilities, making the need for comprehensive oversight and governance even more pressing.
What Are GenAI Platforms?
GenAI platforms serve as vital infrastructure that enables organizations to develop tailor-made AI tools and agents. Their flexibility and ease of use have made them an appealing choice for employees looking to implement AI solutions quickly. As enterprises integrate AI into their operations, the share of organizations accessing GenAI platforms has climbed to 41% as of May 2025. The leading players in this domain are Microsoft Azure OpenAI, used by about 29% of organizations, followed by Amazon Bedrock (22%) and Google Vertex AI (7.2%).
The trend isn't merely about numbers; it's about risk management. With a 73% increase in network traffic linked to GenAI platforms, businesses are compelled to strengthen their data loss prevention (DLP) strategies to counteract potential data breaches. Ray Canzanese, Director of Netskope Threat Labs, stresses that organizations must strike a balance between fostering innovation and ensuring robust security measures are in place to mitigate the risks associated with shadow AI.
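To make the DLP idea concrete, here is a minimal sketch of a pre-flight check that scans an outbound prompt for sensitive data before it reaches a GenAI API. The pattern set and policy are illustrative assumptions, not part of Netskope's report or any specific DLP product.

```python
# Minimal DLP-style scan of an outbound GenAI prompt (illustrative sketch).
# The categories and regexes below are simplified assumptions, not a full DLP ruleset.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(text):
    """Return the sensitive-data categories detected in an outbound prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

# A real deployment would block or redact the prompt when hits are non-empty.
hits = scan_prompt("Contact jane@example.com, card 4111 1111 1111 1111")
```

In practice a security team would pair such checks with inline policy enforcement (block, redact, or coach the user) rather than simple detection.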
Diverse Innovations in On-Premises AI
Organizations are evaluating diverse strategies to innovate swiftly using AI, particularly through on-premises deployments. Currently, 34% of businesses are utilizing large language model (LLM) interfaces for AI applications. Ollama stands out as the clear leader in this area, capturing a 33% market share, while other platforms like LM Studio and Ramalama have just begun to gain traction.
Employees are quick to experiment as well: 67% of organizations reported accessing AI resources from platforms such as Hugging Face. AI agent adoption is also noteworthy, with 39% of enterprises using GitHub Copilot and 5.5% running agents from popular frameworks locally. As a result, on-premises agents now pull more data from SaaS platforms than ever before, creating a need for meticulous governance around API access.
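The API-access governance described above can be sketched as a simple scope check: each agent is granted only the SaaS permissions it needs, and every call is authorized against that grant. The agent names, scope strings, and policy table here are hypothetical illustrations, not drawn from the report.

```python
# Minimal sketch of scoped SaaS API access for on-premises AI agents.
# Agent identifiers and scope names are illustrative assumptions.
AGENT_SCOPES = {
    "build-bot": {"repos:read"},            # CI agent: read-only repo access
    "sales-agent": {"crm:read", "crm:write"},  # sales agent: CRM read/write
}

def authorize(agent, scope):
    """Allow a SaaS API call only if the agent was explicitly granted that scope."""
    return scope in AGENT_SCOPES.get(agent, set())

# An unlisted agent or an ungranted scope is denied by default.
allowed = authorize("build-bot", "repos:read")
denied = authorize("build-bot", "crm:read")
```

Defaulting to deny for unknown agents is the key design choice: shadow agents that were never inventoried get no access at all.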
The Evolving Landscape of SaaS AI Usage
The accelerating pace at which new generative AI applications are being adopted is evident. Netskope identifies more than 1,550 distinct GenAI-linked SaaS applications, a considerable leap from just 317 in February 2025. Enterprise users are consolidating around specific tools, adopting purpose-built applications such as Gemini and Copilot across their operations. These AI applications are becoming increasingly integrated into productivity suites, which fosters collaboration but also raises security concerns.
While ChatGPT has been a popular choice since its debut in late 2022, recent data shows the first decline in its enterprise usage since Netskope began monitoring it. Other applications, such as Anthropic Claude and Grammarly, have seen upticks in adoption, indicating a shifting preference among users.
Strategies for Ensuring AI Governance
Given the rapid adoption of diverse GenAI tools, security personnel and business leaders are urged to take a robust approach to AI governance. Netskope outlines several steps organizations should undertake to secure their environment while still supporting innovation through AI.
1. Assess the GenAI Landscape: Identify which GenAI tools are in use across the organization and understand user engagement with these technologies.
2. Fortify App Controls: Establish strong policies regarding company-approved GenAI applications, creating mechanisms for real-time user coaching and access management.
3. Inventory Local Controls: For organizations employing on-premises GenAI infrastructures, apply relevant security frameworks to safeguard sensitive data, users, and networks.
4. Continuous Monitoring: Maintain vigilant oversight of GenAI usage to identify shadow AI occurrences and stay updated on ethical standards and regulatory shifts.
5. Evaluate Emerging Risks of Agentic Shadow AI: Collaborate with proactive users to define actionable policies that manage the risks associated with unofficial AI implementations.
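Steps 1 and 4 above (assessing the GenAI landscape and continuously monitoring it) can be sketched as a small audit over network or proxy logs. The log line format, domain-to-app mapping, and approved-app set below are illustrative assumptions, not Netskope's actual schema or recommendations.

```python
# Minimal sketch: inventory GenAI app usage from proxy logs and flag shadow AI.
# Log format ("user domain" per line), domain map, and approved set are assumptions.
from collections import Counter

GENAI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Anthropic Claude",
    "gemini.google.com": "Gemini",
}
APPROVED = {"Gemini"}  # hypothetical company-approved GenAI apps

def audit(log_lines):
    """Count GenAI app hits and flag any usage outside the approved set."""
    counts = Counter()
    for line in log_lines:
        _, domain = line.split()  # assumed "user domain" layout
        app = GENAI_DOMAINS.get(domain)
        if app:
            counts[app] += 1
    shadow = {app: n for app, n in counts.items() if app not in APPROVED}
    return counts, shadow

logs = ["alice chat.openai.com", "bob gemini.google.com", "carol claude.ai"]
counts, shadow = audit(logs)
```

Even a toy inventory like this makes the governance loop tangible: measure what is actually in use, compare it against policy, and surface the gap for follow-up coaching or controls.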
Conclusion
As the use of GenAI platforms expands at an unprecedented rate, organizations face both enormous opportunities and significant challenges. Understanding the implications of shadow AI is crucial to confidently navigating this evolving landscape. Netskope's insights serve as a vital resource for businesses looking to securely harness the potential of AI technologies while adequately addressing security concerns.