AI Security Challenges: Bridging the Gap Between Development and Governance in Software
In the fast-evolving landscape of technology, artificial intelligence is reshaping the way software is developed, bringing both opportunities and significant challenges in ensuring security. A recent report by Security Journey, a prominent provider of application security education, shines a light on the pressing need for organizations to bridge the widening gap between faster software development processes and essential security measures.
The Escalating Risks of AI-Driven Development
As organizations increasingly incorporate AI into their software development lifecycles, the pace of code generation has accelerated. Developers can now produce software at unprecedented speeds, often leveraging tools that include large language models and code generation assistants. However, this rapid advancement isn't without consequences. The report, titled Closing the Security Gap in AI, reveals that this race to innovate can lead to a lack of understanding regarding the security implications of AI-generated code.
During a roundtable discussion held in June 2025, experts from various backgrounds, including application security, development, and AI, deliberated on the realities of AI-generated vulnerabilities. Their consensus underscores that while AI tools provide efficiency, they also expand the attack surface for potential security breaches. As such, organizations must recognize these vulnerabilities and adapt accordingly.
Governance Needs to Reflect Current Practices
One striking recommendation from the report emphasizes the need for governance frameworks to accurately portray how employees actually engage with technology. Often, policies surrounding AI are crafted without a genuine understanding of the technology's current usage patterns. When governance is inflexible or too reactive, it inadvertently pushes teams toward using shadow AI—unmonitored technologies that can exacerbate security risks instead of mitigating them.
Empowering Developers with Support and Accountability
With AI shifting more decision-making responsibilities onto developers, many of whom may not possess adequate security training, organizations must take proactive steps. This involves offering continuous education and timely support to equip developers with the necessary knowledge and skills to assess associated risks competently.
Evolving Security Culture to Align with Technology
Furthermore, the report stresses that fostering a culture of security is essential for any successful integration of AI into development practices. Teams are more likely to prioritize security when it becomes an integral part of their daily routines. Encouraging positive reinforcement, establishing clear protocols, and designating internal champions can significantly shift workplace attitudes towards recognizing the importance of security.
Addressing Talent Shortages in an AI-Driven Landscape
AI's adoption can also contribute to talent gaps within organizations. A notable concern is the overreliance on AI tools, which may hinder junior developers from gaining the foundational experience they need. If organizations fail to invest in nurturing their human capital alongside technological advancements, they risk becoming increasingly vulnerable over time.
The Path Forward: Education and Cultural Shifts
Security threats are becoming more pronounced as malicious actors exploit vulnerabilities within AI-generated codebases. The report warns that the frequency of security incidents could intensify as organizations struggle to keep up with the pace of AI evolution. To counter these challenges, a strategic focus on education, thorough testing, and a comprehensive cultural shift emphasizing security is crucial.
Dustin Lehr, an AppSec Advocate at Security Journey, aptly notes, "This isn't a tooling problem - it's a people problem. If we do not align the rapid adoption of AI with equally aggressive training and governance measures, we expose ourselves to serious vulnerabilities. Developers require more than just policies—they need ongoing training, robust support, and a culture that enables them to make secure decisions."
This report not only outlines the complicated challenges facing the industry but also provides actionable insights geared towards closing the existing security gaps. To gain a deeper understanding and explore the full range of recommendations, organizations can download the comprehensive report, Closing the Security Gap in AI, from the Security Journey website.
About Security Journey
Security Journey is committed to empowering organizations by minimizing vulnerabilities through education and training in secure application development. Their programmatic focus on secure coding education includes a vast collection of instructional videos and practical coding exercises within sandbox environments. By equipping teams with a strong foundational knowledge and a security-first approach, they help bridge the critical gap between development and security, striving for a culture where secure software development is the norm. For more information, visit Security Journey.