Endor Labs Reveals Alarming Risks in AI-Suggested Dependencies for 2025
Unveiling Critical Vulnerabilities in AI-Generated Dependencies
In a report released on November 4, 2025, application security company Endor Labs laid bare the security risks of dependencies recommended by AI coding assistants. The State of Dependency Management 2025 report found that a staggering 80% of AI-suggested dependency versions are not safe to use as recommended. The finding comes at a time when businesses are increasingly relying on AI to streamline their development processes.
The State of AI Coding Today
This fourth annual report emphasizes that AI-assisted software development is not a distant prospect but a present reality. As enterprises adopt AI at an unprecedented rate, security risks stemming from unverified third-party code are rising with it. Endor Labs' analysis of more than 10,000 GitHub repositories found that only 20% of the dependency versions suggested by AI are safe to use, meaning the majority of AI-sourced recommendations enter codebases unchecked and potentially harmful.
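As a minimal sketch of what vetting an AI-suggested dependency can look like, the example below checks a single package version against the public OSV.dev vulnerability database before it is adopted. This is an illustration, not Endor Labs' methodology; the package name and version are placeholders for whatever an assistant happens to recommend.

```python
# Minimal sketch: check an AI-suggested dependency version against OSV.dev.
# Not Endor Labs' methodology; package name and version are placeholders.
import json
import urllib.request

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query the OSV.dev API for advisories affecting one package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # Hypothetical AI suggestion: an old requests release with known advisories.
    vulns = known_vulnerabilities("requests", "2.19.1")
    for v in vulns:
        print(v["id"], v.get("summary", ""))
    print("safe to use" if not vulns else f"{len(vulns)} known advisories")
```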
The Role of Model Context Protocol (MCP) Servers
The introduction of Model Context Protocol (MCP) servers, which connect AI agents to a multitude of third-party tools, has compounded these risks. The report describes these servers as centralized access points through which untested and vulnerable code can enter corporate systems, broadening the attack surface organizations must manage. Without appropriate governance and vetting mechanisms in place, companies expose themselves to threats they have little visibility into.
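One lightweight governance measure is to treat the MCP server list itself as reviewable configuration. The sketch below assumes the common JSON layout used by MCP clients (a top-level "mcpServers" map of server names to launch commands) and flags any server that is not on an organization-approved allowlist; the file path and the allowlist contents are hypothetical.

```python
# Minimal sketch: flag configured MCP servers missing from an approved allowlist.
# Assumes the common MCP client config layout with a top-level "mcpServers"
# object; the path and allowlist below are hypothetical examples.
import json
from pathlib import Path

APPROVED_SERVERS = {"filesystem", "github"}  # hypothetical vetted set

def audit_mcp_config(path: Path) -> list[str]:
    """Return the names of configured MCP servers that are not approved."""
    config = json.loads(path.read_text())
    servers = config.get("mcpServers", {})
    return [name for name in servers if name not in APPROVED_SERVERS]

if __name__ == "__main__":
    for name in audit_mcp_config(Path("mcp_config.json")):
        print(f"unapproved MCP server configured: {name}")
```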
Key Findings from the Report
The analysis conducted by Endor Labs sheds light on several critical insights regarding AI-generated code dependencies:
1. High Vulnerability Rates: Between 44% and 49% of the dependencies imported by coding agents contained known security vulnerabilities, illustrating how third-party code can introduce significant risk when it is not rigorously vetted.
2. Improving Safety with Security Tools: When AI coding agents were paired with security tooling, the rate of safe dependency recommendations rose from 20% to 57%. This near-threefold improvement underscores the value of integrating security checks directly into AI workflows (a minimal example follows this list), and it also highlights the risk of accepting AI recommendations without any verification step.
3. Immaturity of the MCP Ecosystem: More than 10,000 MCP servers have appeared within a single year, raising concerns about the ecosystem's maturity. Around 40% of these servers lack any license, and approximately 75% were developed by individuals without enterprise-grade security protections. Furthermore, 82% of these servers interact with sensitive APIs, raising the risk of data exposure and exploitation.
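As a concrete, if minimal, example of pairing an AI-assisted workflow with a security tool, the sketch below wires a dependency scan into a build step so that vulnerable suggestions fail fast. It uses pip-audit, one widely available scanner, purely for illustration; the report does not name specific tools, and the requirements.txt path is an assumption about project layout.

```python
# Minimal sketch of a CI gate: scan pinned dependencies with pip-audit and
# fail the build if any known vulnerability is reported. Assumes pip-audit
# is installed and dependencies are pinned in requirements.txt.
import subprocess
import sys

def scan_dependencies(requirements: str = "requirements.txt") -> int:
    """Run pip-audit against a requirements file; return its exit code."""
    result = subprocess.run(
        ["pip-audit", "-r", requirements],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("vulnerable dependencies found; failing the build", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan_dependencies())
```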
Governance is Key
Henrik Plate, a security researcher at Endor Labs, emphasizes the dual nature of AI's role in modern development: AI coding agents accelerate innovation, but they also open gateways for untrusted and potentially malicious code. The software development landscape has changed markedly, presenting new opportunities alongside significant threats.
Effective governance is essential to navigate this complex environment, allowing organizations to leverage the strengths of AI while ensuring robust security practices are in place.
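What such governance can look like in practice is a simple policy gate applied before any AI-suggested dependency is adopted. The sketch below checks two of the risk signals the report highlights, a missing license and a very recently published package, against public PyPI metadata; the 30-day threshold and the example package are illustrative assumptions, not rules from the report.

```python
# Minimal sketch of a dependency-governance gate: reject an AI-suggested
# package version if it declares no license or was published very recently.
# The threshold is an illustrative assumption, not a rule from the report.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

MIN_AGE = timedelta(days=30)  # hypothetical policy threshold

def passes_policy(name: str, version: str) -> bool:
    """Check one package version's PyPI metadata against the policy."""
    url = f"https://pypi.org/pypi/{name}/{version}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        meta = json.load(resp)
    if not meta["info"].get("license"):
        print(f"{name}=={version}: no declared license")
        return False
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for f in meta.get("urls", [])
    ]
    if uploads and datetime.now(timezone.utc) - max(uploads) < MIN_AGE:
        print(f"{name}=={version}: published less than {MIN_AGE.days} days ago")
        return False
    return True

if __name__ == "__main__":
    # Hypothetical AI-suggested dependency to vet before installation.
    print(passes_policy("requests", "2.31.0"))
```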
To address these concerns, Endor Labs strongly recommends that companies take immediate action to enhance their oversight and governance surrounding AI-generated code. The full State of Dependency Management 2025 report is available for those interested in actionable insights to protect their systems from these emerging threats.