New Research Highlights Dangerous Blind Spots in AI Agent Permissions Management

Unpacking the AI Agent Permissions Dilemma



In a groundbreaking report released on March 19, 2026, by Oso and Cyera, startling data about enterprise permission management has come to light. The two firms, known for their expertise in permission management and AI security, analyzed 2.4 million workers and more than 3.6 billion application permissions. The findings reveal a critical oversight in corporate security protocols: a staggering 96% of application access is left dormant. This blind spot not only highlights inefficiencies in how permissions are managed but also poses significant risks as AI agents become more prevalent in the workplace.

Key Findings of the Research


The research surfaces several crucial statistics that shed light on how permissions are currently utilized in enterprises:
  • 96% Dormant Permissions: Employees rarely engage with over 96% of the permissions granted to them, creating a massive vulnerability as AI agents may inherit these permissions without scrutiny.
  • Systemic Over-Provisioning: Over 80% of SaaS access is handled via static profiles, indicating that many organizations rely on outdated permission models that are challenging to audit.
  • Inaccessible Sensitive Data: An overwhelming 91% of sensitive data, including personally identifiable information (PII), financial records, and health records, remains untouched by human users, yet 13% of the workforce retains standing access to it.
  • Risk of Data Alteration: Nearly 31% of users have the ability to modify or delete sensitive information, often without awareness of the potential fallout.
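At its core, the dormant-permission finding is a set-difference question: which grants appear in the provisioning system but never in the access logs? A minimal sketch of that audit follows, using hypothetical record shapes (`(user, permission)` grant tuples and `(user, permission, timestamp)` log events); real IdP and SaaS exports will differ.

```python
from datetime import datetime, timedelta

def find_dormant_permissions(grants, access_log, window_days=90):
    """Return granted permissions that saw no use within the lookback window.

    grants: set of (user, permission) tuples currently provisioned.
    access_log: list of (user, permission, timestamp) usage events.
    Both shapes are illustrative assumptions, not a real product schema.
    """
    cutoff = datetime.now() - timedelta(days=window_days)
    used = {(u, p) for (u, p, ts) in access_log if ts >= cutoff}
    return grants - used

# Hypothetical data: alice used only one of her two grants; bob used none.
grants = {("alice", "crm:export"), ("alice", "hr:read"), ("bob", "db:delete")}
log = [("alice", "crm:export", datetime.now())]
dormant = find_dormant_permissions(grants, log)
# dormant now holds ("alice", "hr:read") and ("bob", "db:delete")
```

In practice the hard part is not the set arithmetic but collecting usage telemetry across hundreds of SaaS apps, which is why static-profile provisioning is so difficult to audit.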

The Implications of Dormant Permissions


With the accelerating deployment of AI agents in workplaces, organizations are beginning to adopt machine-driven workflows at an unprecedented pace. Research from industry leaders such as IDC projects that spending on AI applications could reach $1.3 trillion by 2029, while Gartner anticipates that 40% of enterprise applications will integrate AI agents by 2026.

According to Graham Neray, co-founder and CEO of Oso, unused permissions were once an annoyance that could be managed with human oversight. The versatility and unsupervised nature of AI agents in today's digital landscape, however, turn these stale permissions into potential ticking time bombs for security breaches. Unlike human employees, who operate within certain constraints, AI agents work without rest and interact directly with systems and APIs, raising the stakes for error and misuse.

Incorporating AI agents into these environments necessitates a meticulous reevaluation of existing permission frameworks. Incidents have already spotlighted the risks: agents equipped with broad access have inadvertently deleted critical production databases or exfiltrated sensitive information through permissions that were never properly scrutinized.
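One common remedy for the broad-access problem described above is a deny-by-default gate in front of every agent action: nothing runs unless it was explicitly granted. The sketch below is a hypothetical illustration of that pattern (the agent IDs and action names are invented, and real systems such as Oso's policy engine express this declaratively rather than in application code).

```python
# Hypothetical deny-by-default authorization gate for agent tool calls.
# Actions absent from the allowlist are refused, so an agent cannot
# inherit a human's dormant permissions by accident.
ALLOWED_ACTIONS = {
    "support-agent": {"ticket:read", "ticket:comment"},  # no delete, no export
}

class PermissionDenied(Exception):
    """Raised when an agent attempts an action outside its grant."""

def authorize(agent_id, action):
    """Allow the action only if it was explicitly granted to this agent."""
    if action not in ALLOWED_ACTIONS.get(agent_id, set()):
        raise PermissionDenied(f"{agent_id} may not perform {action}")
    return True

authorize("support-agent", "ticket:read")      # permitted
# authorize("support-agent", "ticket:delete")  # would raise PermissionDenied
```

The design choice worth noting is the direction of the default: static profiles grant broadly and prune later, while this gate grants nothing and adds scopes as a task demands them.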

Expert Insights


Mark Hillick, Chief Information Security Officer at Brex, emphasizes the need for proactive design in deploying AI agents. He asserts, “Speed without control is risk, and control without speed is a blocker,” underscoring the balance organizations need to strike in order to remain agile while ensuring robust security measures.

Similarly, Nancy Wang, Chief Technology Officer at 1Password, echoes these concerns, remarking that traditional access models designed for human workers do not holistically align with the operational nature of AI agents. Wang advocates for new identity systems that create a tighter relationship between agent actions and human intent to mitigate potential risks.
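One way identity systems can bind agent actions to human intent, as Wang describes, is through short-lived delegation tokens: a human approves a task, and the agent receives a narrowly scoped, expiring credential rather than the human's standing access. The HMAC-signed token below is a simplified sketch of that idea, not any vendor's implementation; the secret, claim names, and scopes are all illustrative.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # illustrative only; real systems use managed keys

def issue_delegation(user, agent, scopes, ttl_seconds=300):
    """Mint a short-lived token tying an agent's scopes to a human grantor."""
    claims = {"sub": agent, "act_for": user, "scopes": scopes,
              "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_delegation(token, required_scope):
    """Accept only unexpired, untampered tokens carrying the needed scope."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["exp"] > time.time() and required_scope in claims["scopes"]

token = issue_delegation("alice", "support-agent", ["ticket:read"])
```

Because every token names both the agent and the human it acts for, an audit trail can answer "who intended this action?", which standing human credentials reused by agents cannot.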

Moving Forward


In light of these findings, Oso and Cyera urge organizations to reconsider their approach to data security in an era dominated by powerful AI agents. The data they provide will undoubtedly provoke deep discussions about the future of data security and the compliance frameworks necessary for mitigating risks associated with agentic technology.

For comprehensive details and recommendations regarding securing agentic deployments, the full research report is available for review at Oso’s research page.

As organizations continue to integrate AI into their operations, the lessons drawn from this research are critical. The call to action is clear: without securing permissions, organizations cannot effectively secure AI—an essential tenet for any enterprise aiming to navigate this complex technological landscape successfully.
