AI in DevSecOps: A critical crossroads for security and risk management
Oct 08, 2025 | 6 min read
Black Duck’s recently released report, “Balancing AI Usage and Risk in 2025: The Global State of DevSecOps,” is a comprehensive survey of more than 1,000 DevSecOps software and security professionals. The report makes it clear that AI use is no longer a future consideration—it is a present imperative. It’s time for DevSecOps teams to act decisively in harnessing AI’s power while mitigating its risks.
AI in DevSecOps: Game-changer or dangerous disruptor?
Our report reveals a clear trend in the adoption of AI coding assistants. A combined 43.66% of respondents report using AI tools frequently or constantly, indicating that AI is deeply integrated into their daily workflows. This shift toward AI-driven development highlights the growing recognition of AI’s ability to streamline and optimize the DevSecOps pipeline. And nearly all organizations (96.7%) are now leveraging open source models for building both internal and external products and software.
The rapid proliferation of AI demands immediate governance and oversight to prevent systemic vulnerabilities. For example, our data highlights a significant “shadow AI” problem, with 10.69% of respondents admitting to using AI coding assistants without official permission, in an unverified or unmonitored way. This unauthorized use can introduce security risks and compliance issues, underscoring the need for robust governance frameworks to ensure that AI tools are used safely and effectively.
A double-edged sword
As the pace of development quickens and the threat landscape becomes increasingly complex, the tension between development and security has never been more pronounced. Over 56% of our report’s respondents state that AI coding assistants have ushered in novel security risks, including the potential for introducing vulnerabilities, regulatory compliance issues, and proprietary code being inadvertently incorporated into training models.
While AI can automate and streamline the coding process, it can also inadvertently introduce bugs or security flaws that will not be immediately apparent. This is particularly dangerous in environments where code is rapidly developed and deployed, and where there is little time for thorough manual reviews. Additionally, the complexity of AI systems themselves can create new attack surfaces, making it crucial for DevSecOps teams to continuously assess the security implications of AI integration into the SDLC.
AI can also introduce significant compliance issues. The regulatory landscape is stringent, and the use of AI in development processes must adhere to various standards and guidelines. This risk is noted among our report’s respondents, with 14.99% concerned that AI-generated code will lead to legal and financial repercussions. Moreover, the transparency and auditability of AI-generated code are often an issue, making it difficult to trace and justify the code’s compliance status.
Despite these risks, most of our respondents believe that AI is a powerful ally in the fight for security. A majority—63.33%—of DevSecOps professionals agree that AI has tangibly improved their ability to write more-secure code. This is particularly evident in the early stages of the development pipeline, where AI tools can provide real-time feedback and directives to developers. For instance, 19.78% of our survey respondents noted that AI provides faster identification of real security vulnerabilities in code as it’s written. This not only enhances the security of the final product but also reduces the time and resources required for manual security testing.
It’s important to note that a collaborative strategy between AI tools and human overseers is paramount to achieving reliable DevSecOps processes. AI can automate and accelerate the process, but human intuition and hands-on expertise are indispensable to secure code development. By integrating those strengths, DevSecOps teams can create a synergistic approach that ensures a robust and responsive security environment: human experts provide context and make nuanced decisions, while AI handles the repetitive and data-intensive tasks.
Strategic asset or security liability?
A critical challenge unearthed by the report is a “shadow AI” problem, where AI tools operate outside the boundaries of an organization’s security policies, leaving the company vulnerable to a range of threats.
When employees use unauthorized AI tools, they can inadvertently expose sensitive organizational data to external platforms or services that do not have the appropriate security measures in place. This can lead to data breaches, unauthorized access, and potential loss of IP or sensitive company data. The challenge is compounded by the fact that external AI tools are typically not integrated into an organization’s existing security infrastructure, making it hard to monitor their activities and ensure compliance.
A critical risk introduced by shadow AI is the possibility of malware infiltrating the organization. Employees may download or use AI tools from unverified sources, which could harbor malicious code. Once these tools are integrated into a company’s systems, they can act as vectors for malware, leading to severe data compromise and operational disruptions. The risk is not just theoretical; there have already been instances where unregulated AI tools have led to critical security incidents. A 2025 IBM report notes that 1 in 5 surveyed organizations experienced a cyberattack due to security issues with shadow AI. The IBM report goes on to note that breaches at firms with shadow AI cost, on average, $670,000 more than those without it.
Without proper oversight, sensitive corporate data can be input into public AI models, where it is used for model training or exposed through the vendor's own security vulnerabilities. A Samsung engineer, for example, leaked corporate secrets by using a free, public AI tool, which made the information accessible to other companies.
The challenge of tracking and managing AI usage outside of approved channels is a critical aspect of maintaining a secure environment. The sheer number of AI tools available, and the ease with which they are deployed, can quickly outpace the ability of IT and security teams to keep up. Without a clear understanding of what tools are being used and how, it’s nearly impossible to ensure that AI activities are aligned with a company’s security policies.
Comprehensive DevSecOps governance policies are essential as shadow AI continues to rise. These policies should proactively address the use of AI tools, educating employees on the risks and delineating the approved methods for integrating AI into their workflows. By cultivating a culture of security and establishing clear guidelines, organizations can diminish the ability of shadow AI to cause harm. Moreover, implementing monitoring and auditing processes can help identify and neutralize the use of unauthorized AI tools, maintaining the integrity of an organization's security framework.
A catalyst for security transformation
Nevertheless, the strategic use of AI can act as a powerful security force multiplier. For example, Black Duck Assist™, an AI-powered security assistant that works directly inside the developer's IDE, uses a large language model supercharged with decades of our own security insights to give developers
- Clear, simple explanations of complex vulnerabilities and recommendations for their priority and remediation
- Concrete, context-aware, AI-generated suggestions on how to fix a problem, including lines of code ready to be cut and pasted into the project, so developers can move quickly and avoid late-stage refactoring
By providing intelligent help right where the code is being written, Black Duck Assist not only helps developers fix issues faster but also acts as a continuous, on-the-job training tool. It turns AI from a source of risk into a powerful partner in building secure software.
An IP guardian
The shadow AI problem is especially tricky because many AI assistants inject small snippets of code that traditional software composition analysis (SCA) tools, which just look at declared dependencies, can completely miss.
Black Duck® SCA snippet analysis is uniquely built to solve this challenge. It can identify these small code fragments, match them to their original open source projects, and expose any associated license obligations that may put valuable IP at risk. Our massive KnowledgeBase™, sourced and curated by our in-house Cybersecurity Research Center (CyRC), catalogues more than 8.7 million open source components from over 57,700 forges and repositories.
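To make the idea concrete, here is a minimal, conceptual sketch of how snippet matching can work in principle: code is normalized, split into overlapping windows of lines, hashed, and compared against fingerprints of known open source files. This illustrates the general technique only; it is not Black Duck’s matching algorithm, and the window size, normalization rules, and fingerprint corpus shown are assumptions.

```python
import hashlib

def fingerprints(source: str, k: int = 5) -> set[str]:
    """Hash every overlapping window of k normalized lines of code."""
    # Normalize: strip whitespace and drop blank lines so formatting
    # changes alone don't defeat the match.
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return set()
    return {
        hashlib.sha256("\n".join(lines[i:i + k]).encode()).hexdigest()
        for i in range(max(len(lines) - k + 1, 1))
    }

# Hypothetical index of fingerprints for known open source files, e.g.
# {"left-pad@1.3.0 (MIT)": {"ab12...", ...}}. A real corpus would be
# built from a knowledge base of open source projects.
KNOWN_SNIPPETS: dict[str, set[str]] = {}

def match_snippets(candidate_code: str) -> list[str]:
    """Return the components whose fingerprints overlap the candidate code."""
    probe = fingerprints(candidate_code)
    return [
        component
        for component, hashes in KNOWN_SNIPPETS.items()
        if probe & hashes
    ]
```

Any match surfaces the originating component, and with it the license obligations that need to be reviewed.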
Black Duck SCA snippet analysis can be activated via API and triggered with each commit. This allows snippet analysis to scale alongside AI coding assistants and eliminates the delays of batched analysis.
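As a rough sketch of what per-commit automation might look like, the snippet below collects the files touched by a commit and submits them to a scanning service. The endpoint URL, authentication variable, and request shape are placeholders, not the actual Black Duck API; consult the product documentation for the real integration.

```python
import os
import subprocess
import requests  # third-party: pip install requests

# Placeholder values; substitute your scanning service's real endpoint and
# credentials. This is NOT the Black Duck API.
SCAN_URL = "https://scanner.example.com/api/snippet-scan"
API_TOKEN = os.environ.get("SCAN_API_TOKEN", "")

def changed_files(commit: str = "HEAD") -> list[str]:
    """List files added or modified by the given commit."""
    out = subprocess.run(
        ["git", "diff-tree", "--no-commit-id", "--name-only", "-r",
         "--diff-filter=ACM", commit],
        capture_output=True, text=True, check=True,
    )
    return [p for p in out.stdout.splitlines() if p]

def scan_commit(commit: str = "HEAD") -> None:
    """Submit each changed file to the (hypothetical) snippet-scan endpoint."""
    for path in changed_files(commit):
        with open(path, "rb") as fh:
            resp = requests.post(
                SCAN_URL,
                headers={"Authorization": f"Bearer {API_TOKEN}"},
                files={"file": (path, fh)},
                timeout=30,
            )
        resp.raise_for_status()

if __name__ == "__main__":
    scan_commit()
```

A post-commit hook or CI job could invoke a script like this so that every commit is analyzed as it lands, rather than waiting for a scheduled batch scan.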
Ultimately, snippet analysis can help you ensure that revenue-generating projects or critical release branches don’t include licensed open source components that could result in asset write-downs or costly legal issues.
Tracking open source AI models in critical projects
The pressure to stay competitive is driving your development teams to integrate AI models into your business applications. However, building and training these models in-house requires significant resources and expertise. As a result, many organizations are turning to open source AI models as a practical solution.
Black Duck SCA detects and manages risks in open source AI models within your projects, enabling you to govern the use of AI models within your organization as well as include information about those models in your Software Bills of Materials (SBOMs).
By cataloguing each associated AI model card, Black Duck SCA provides developers with the necessary information and insight to make informed choices about the AI models they use in their applications, enabling them to
- Assess the suitability of a particular AI model for their specific use case
- Compare different models and choose the most appropriate one
- Identify potential issues or biases in the models and take corrective action
- Ensure compliance with organizational policies and regulatory requirements
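To illustrate the SBOM piece, here is a minimal sketch of how an open source AI model might be recorded as a component, loosely following the CycloneDX format, which added a machine-learning-model component type in version 1.5. The model name, version, license, and supplier shown are placeholders, and real entries would typically carry richer model card detail.

```python
import json

# Minimal, illustrative CycloneDX-style SBOM fragment recording an open source
# AI model as a component. All values below are placeholders; see the
# CycloneDX specification for the full component and model card schema.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-sentiment-model",   # placeholder model name
            "version": "2.1.0",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
            "supplier": {"name": "Example AI Project"},
        }
    ],
}

print(json.dumps(sbom, indent=2))
```

Recording models this way lets downstream consumers of the SBOM see which AI models ship in a product, under what licenses, and from which suppliers.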
Black Duck provides the industry's most comprehensive visibility into your software—whether written by your developers, generated by AI, or supplied by third-party vendors—and we deliver the informed, automated governance you need to manage security at the scale and speed AI-enabled pipelines require.
Download your copy of Black Duck’s report now to stay ahead of the AI security curve: “Balancing AI Usage and Risk in 2025: The Global State of DevSecOps”