Get the best from AI in software development without risking the worst
Sep 15, 2025 | 5 min read
AI has become indispensable to software development. Ninety percent[i] of organizations report using AI coding assistants such as Copilot and Claude Code. Over 96% of organizations are using open source AI models to power core functions like data processing, computer vision, and process automation in the software they ship. And one-fifth of organizations prohibit AI tools but know their developers are using them anyway.
There’s no doubt that AI helps developers code faster. But AI coding tools just create code that mimics patterns observed in open source projects and other publicly available code. Traditionally, AI code generators are trained to prioritize functional code—security is often no more than a happy coincidence, and software license compliance is just a suggestion. So how can you get the best of AI without exposing your organization to the worst?
Ultimately, as with any DevSecOps initiative, you need contributors from both development and security aligned on strategy to achieve defined goals. If dev wants speed, it has to come with the security controls AppSec needs and the IP protections legal requires. That’s a lot of moving parts. What could go wrong?
[i] All statistics quoted in this blog post are from the 2024 Black Duck “Global State of DevSecOps” report
AI coding assistants: What could go wrong?
AI code generators like Copilot are known to introduce vulnerabilities in one-third of the code they generate, according to a Cornell University study. Worse, when these tools are asked to fix the issues they created, they introduce new ones 42% of the time, according to the same study.
AI coding assistants can generate code very quickly, and at great scale. This can flood pipelines with potentially vulnerable or weak code and accrue massive backlogs for AppSec review. Two major issues that you absolutely DO NOT want AI tools to introduce at scale are improper input validation and OS command injections.
- Improper input validation occurs when code is implemented that doesn’t properly validate user inputs. For example, Copilot might suggest code that doesn’t check for SQL injection or cross-site scripting vulnerabilities, making the application susceptible to these types of attacks.
- OS command injections allow attackers to execute arbitrary commands. For example, if an AI tool suggests a command that directly interacts with the operating system, it might not include the necessary input sanitization or validation. And without it, attackers can use common techniques to exploit this unchecked method and propagate an attack.
Avoiding such issues is a basic best practice for secure coding, but they are nonstandard considerations for AI coding assistants.
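To make these two weaknesses concrete, here’s a minimal Python sketch contrasting the kind of unchecked code an assistant might suggest with hardened equivalents. The function names and the users table are hypothetical; the point is the pattern, not any specific API.

```python
import sqlite3
import subprocess

# Patterns an AI assistant might plausibly emit (illustrative only).
def find_user_unsafe(conn, username):
    # String concatenation lets a crafted username rewrite the query (SQL injection).
    return conn.execute("SELECT id FROM users WHERE name = '" + username + "'").fetchall()

def ping_host_unsafe(host):
    # shell=True hands untrusted input straight to the OS shell (command injection).
    return subprocess.run("ping -c 1 " + host, shell=True, capture_output=True)

# Hardened equivalents that validate or parameterize the input.
def find_user_safe(conn, username):
    # Parameterized queries keep user input out of the SQL grammar.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

def ping_host_safe(host):
    # Reject anything that isn't a plausible hostname, then bypass the shell entirely.
    if not host or not all(c.isalnum() or c in ".-" for c in host):
        raise ValueError(f"invalid host: {host!r}")
    return subprocess.run(["ping", "-c", "1", host], capture_output=True)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    payload = "alice' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # [(1,)] -- the injection returns every row
    print(find_user_safe(conn, payload))    # []     -- no user has that literal name
```

A static analysis tool will flag both unsafe variants, but the cheapest fix is to never commit them in the first place.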
What about license risks?
AI coding assistants have to be trained on something to be able to produce functional code for a given project in a given language. More often than not, this training is based on open source projects, which typically carry specific licensing obligations. While there are many licenses, some are potentially more detrimental to the business than others, such as those compelling the free release of any work derived from the included code or component.
If developers use AI-generated code without understanding the licensing terms associated with it, they run the risk of unintentionally “open sourcing” their proprietary code, devaluing intellectual property and opening the organization to legal implications.
Three steps to take right now to reduce AI risk
The problem isn’t AI. It’s how your developers are using it. If we implicitly trust something we don’t understand, we open ourselves to potentially devastating consequences, not just for ourselves but for our customers and partners. Three steps can help you prevent these consequences without taking AI away from your developers.
Automate security checks
Automating security scans is essential for timely, consistent, and repeatable results. This is particularly important as AI coding assistants are increasingly being used semiautonomously and pushing code through pipelines quickly. Automation should balance security coverage with pipeline speed, and trigger only necessary tests based on pipeline actions.
Set up automated security scans to
- Detect new code. CI/CD pipelines with automated scanning can be integrated into version control systems (e.g., Git) to monitor changes in real time, thus detecting new code as it’s generated.
- Add code to test queues. Once new code is detected, automatically adding it to a test queue ensures that all AI-generated code is subject to comprehensive testing as quickly as possible. This avoids burdening AppSec teams and risking undesirable compromises due to time restrictions.
- Run AST scans. Automating application security testing (AST) in CI/CD pipelines and IDEs ensures early detection of issues, when it’s easiest and least costly to fix them.
- Inform and assist developers. Prioritize issues for remediation based on risk-tolerance policies, and close feedback loops with developers via their preferred issue management workflows (e.g., Jira tickets, fix pull requests). Automate the process of providing developers with clear guidance on what to fix and how to fix it.
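As an illustration of the first two bullets, here’s a minimal sketch, assuming a Git repository and a command-line scanner, of a CI step that detects the files a commit touched and queues them for analysis. The run_scanner function is a placeholder for whatever AST tool your pipeline actually invokes.

```python
"""Minimal sketch of a CI step that scans only the files changed in the latest commit."""
import subprocess
import sys

SCANNABLE = (".py", ".js", ".ts", ".java", ".go")  # example extensions; adjust to your stack

def changed_files(base_ref: str = "HEAD~1") -> list[str]:
    # Ask git which files the latest commit touched; this is the "detect new code" step.
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(SCANNABLE)]

def run_scanner(path: str) -> int:
    # Placeholder: swap in your real SAST/SCA CLI invocation here.
    print(f"queueing {path} for analysis")
    return 0  # pretend the scan passed

if __name__ == "__main__":
    failures = [f for f in changed_files() if run_scanner(f) != 0]
    if failures:
        print("security scan failed for:", ", ".join(failures))
        sys.exit(1)  # fail the pipeline so the issue reaches the developer immediately
```

Failing the pipeline, rather than silently logging, is what closes the feedback loop described above.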
Cultivating security-capable developers should be a persistent background activity amid all this. Developers must be able to cross-check the output of AI code generators as well as fix issues detected in later stages. Additionally, train your AI models to recognize and avoid security risks before they produce them. Automate this by incorporating security best practices and rules into the training data and algorithms used by your AI.
Implement snippet scanning
Snippet scanning can quickly identify and address potential software license conflicts before they are propagated across projects. Implement mechanisms to automatically detect small excerpts of code (snippets) sampled from licensed open source components.
Integrate snippet scanning tools into dependency management systems, or initiate their analysis with every code commit to keep pace with faster, AI-enabled pipelines. This will help you maintain an up-to-date and accurate inventory of third-party components.
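The sketch below illustrates the basic idea behind snippet matching, under the simplifying assumption that a fingerprint index of known open source code is already available: normalize the code, hash fixed-size windows of lines, and look those hashes up in the index. Commercial SCA tools use far more robust fingerprinting; known_index here is a hypothetical stand-in for that database.

```python
import hashlib

WINDOW = 5  # number of consecutive, normalized lines per fingerprint

def fingerprints(source: str):
    # Normalize whitespace and drop blank lines so trivial reformatting doesn't defeat a match.
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return
    for i in range(max(len(lines) - WINDOW + 1, 1)):
        window = "\n".join(lines[i:i + WINDOW])
        yield hashlib.sha256(window.encode()).hexdigest()

def match_known_snippets(source: str, known_index: dict[str, str]) -> set[str]:
    # known_index maps fingerprint -> "component (license)" it came from (hypothetical data).
    return {known_index[fp] for fp in fingerprints(source) if fp in known_index}

# Usage: flag a commit whose fingerprints overlap, say, a GPL-licensed component,
# then route it to legal/AppSec review before it propagates across projects.
```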
Adopt AI in phases
Instead of a big-bang approach in which AI is rolled out everywhere all at once, go with a step-by-step process that gradually integrates AI into your workflows. This allows you to manage risks, optimize resources, and ensure that your AI solutions are effectively aligned with your business needs.
- Restrict AI access to certain teams based on their readiness and the critical nature of their projects. For instance, teams working on an application that’s not customer-facing or business-critical, or those operating in business units or global regions subject to lesser regulatory scrutiny, would be good starting points.
- Ensure the selected teams have robust mitigation controls in place. This includes established protocols, code review processes, and testing frameworks. Automatically enforce security gates that can act quickly if AI tools violate policies (see the sketch after this list).
- Use pilot projects to evaluate the impact of AI on your development processes. Monitor key metrics such as code quality, security vulnerabilities, and developer productivity to assess the effectiveness—and risk—of your AI tools.
- Establish a feedback loop that allows teams to provide firsthand insights and recommendations to new teams adopting AI tools. The feedback is also useful for refining your AI strategy and addressing issues before broader deployment.
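As one example of the gate mentioned in the second bullet above, the sketch below fails a build when scan findings exceed a team’s risk-tolerance thresholds. The results format and the thresholds are assumptions; adapt both to whatever your scanner actually emits.

```python
import json
import sys

# Example policy, not a standard: severities missing from the policy are treated as disallowed.
MAX_ALLOWED = {"critical": 0, "high": 0, "medium": 5}

def gate(results_path: str) -> bool:
    with open(results_path) as fh:
        findings = json.load(fh)  # assumed shape: [{"severity": "high", ...}, ...]
    counts: dict[str, int] = {}
    for finding in findings:
        sev = finding.get("severity", "unknown").lower()
        counts[sev] = counts.get(sev, 0) + 1
    violations = {s: c for s, c in counts.items() if c > MAX_ALLOWED.get(s, 0)}
    if violations:
        print("policy gate failed:", violations)
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if gate(sys.argv[1]) else 1)
```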
Gradually expand your use of AI tools based on the success and learnings from initial teams. This phased approach allows for controlled scaling and continuous improvement. A phased approach also gives you the opportunity to implement security controls to mitigate risk. These controls should include
- Training and education for developers on potential risks associated with AI-generated code
- Security tools and practices such as those discussed in this blog post
- Code review policies to ensure all code is reviewed by experienced developers with security and compliance expertise
- Testing frameworks to validate AI-generated code including unit tests, integration tests, and security tests
- Incident response plans for handling breaches as well as issues related to AI-generated code
- Regular audits of your AI-generated code and development processes to identify gaps in controls and ensure ongoing compliance
- Collaboration with security teams so they can provide guidance to help your developers understand and implement best practices
Black Duck is here to help
Black Duck has a proud history of helping organizations all over the world secure their software. We led the way in the use of open source code, helping developers incorporate it safely, securely, and legally in their own projects. Now, we’re defining the next frontier of application security, one shaped by AI-enabled development pipelines and expanding regulations.