Addressing the hidden risks of AI coding tools
Aug 18, 2025 | 1 min read
AI coding assistants are transforming the software development landscape, boosting productivity by as much as 26%, according to a recent study by researchers from Princeton University, MIT, Microsoft Corp., and the University of Pennsylvania.
Unfortunately, that efficiency comes with significant risks. Recent studies have shown that:
- Approximately 48% of code snippets produced by AI coding assistants contain memory-related bugs that could be exploited by attackers
- Popular AI coding assistants like ChatGPT, GitHub Copilot, and Amazon CodeWhisperer generate correct code only 65.2%, 46.3%, and 31.1% of the time, respectively
- Developers relying on these tools tend to write less-secure code while being more confident in its security
In other words, now is the time to implement tools that mitigate the risks associated with AI-generated code.
Our latest guide, “Strategies for AI-Powered Software Development,” explores the risks introduced by AI coding assistants and offers proven mitigation strategies you can implement today. To learn more, download the guide.