Addressing the hidden risks of AI coding tools

Corey Hamilton


Aug 18, 2025 / 1 min read

AI coding assistants are revolutionizing the software development landscape, boosting productivity by up to 26%, according to a recent study by researchers at Princeton University, MIT, Microsoft, and the University of Pennsylvania.

Unfortunately, that efficiency comes with significant risks. Recent studies have shown that:

  • Approximately 48% of code snippets produced by AI coding assistants contain memory-related bugs that attackers could exploit
  • Popular AI coding assistants like ChatGPT, GitHub Copilot, and Amazon CodeWhisperer generate correct code only 65.2%, 46.3%, and 31.1% of the time, respectively
  • Developers relying on these tools tend to write less-secure code while being more confident that it is secure

In other words, now is the time to adopt tools that mitigate the risks of AI-generated code.

Our latest guide, “Strategies for AI-Powered Software Development,” explores the risks introduced by AI coding assistants and offers proven mitigation strategies you can implement today. To learn more, download the guide.
