For more than a decade, software security has evolved gradually—new tooling here, a policy tweak there, incremental cultural shifts toward DevSecOps. But with the rise of Generative AI and large language models (LLMs), that era is over. Application security (AppSec) isn’t evolving anymore. It is being fundamentally rewritten.
The BSIMM16 report provides the clearest industrywide snapshot yet of how AI is reshaping software security—across development, testing, compliance, governance, and even organizational culture. The data-driven Building Security in Maturity Model (BSIMM) shows how leading organizations actually build and run their software security programs. Instead of prescribing best practices, it documents 128 real-world software security activities observed across more than 100 firms, giving teams a clear, evidence‑based way to benchmark their maturity and prioritize improvements—especially as AI, supply chain risk, and automation reshape AppSec.
And the message is unmistakable: AI is driving the most significant shift in AppSec since the move to cloud-native architectures.
Organizations that embrace this shift will accelerate innovation and reduce risk. Those that don’t will find themselves facing vulnerabilities they can’t see, threats they don’t understand, and regulatory obligations they can’t meet.
For years, developers relied on intuition, experience, and pattern recognition to make secure coding decisions. AI changes this dynamic entirely.
BSIMM16 makes it clear that LLM‑generated code is not secure by default—even if it looks clean, idiomatic, and professional. It often omits crucial security controls or introduces subtle logic vulnerabilities that automated scanners weren’t designed to detect. This creates a paradox: AI accelerates development dramatically, but it also accelerates the introduction of hard‑to‑spot vulnerabilities. As a result, organizations are forced to expand their threat models to account for AI‑generated code and the subtle flaws it can introduce.
The firms leading the way are already investing in AI‑specific attack intelligence and developing technology‑specific attack patterns that account for this new paradigm.
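To make the “looks clean but isn’t secure” pattern concrete, here is a minimal illustrative sketch (not an example from the report): an AI assistant will often produce a tidy, readable database query built by string interpolation, which is injectable, while the parameterized version is the fix. The function and table names are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Looks clean and idiomatic, but interpolating user input directly
    # into SQL makes this query injectable.
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the input as data,
    # so it can never be interpreted as SQL.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- injection returns every row
print(len(find_user_safe(conn, payload)))    # 0 -- payload matches no user
```

Both versions pass a casual review and every functional test, which is exactly why scanners tuned for older patterns can miss what AI-assisted development introduces at scale.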
AI isn’t just a technical disruption—it’s a governance disruption.
Regulators around the world are raising expectations for software security, and AI‑driven development is accelerating that pressure. BSIMM16 shows significant growth in security activities that help organizations prove the trustworthiness of their development environments.
The EU Cyber Resilience Act, U.S. government self‑attestation requirements, and similar initiatives worldwide are sending the same message: If AI touches your software, you must be able to prove you built it securely.
Organizations that treat AI as an “experiment” rather than a regulated software component risk falling behind—and falling out of compliance.
One of the strongest signals from BSIMM16 is the explosive growth in automation across the software supply chain.
Why? Because manual review simply cannot keep pace with AI‑accelerated development velocity.
AI writes code at machine speed. Security teams cannot defend it at human speed. The future of AppSec belongs to organizations that move from manual enforcement to continuous, automated, verifiable controls.
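One way to picture the move from manual enforcement to continuous, verifiable controls is a small CI policy gate. This is an illustrative sketch, not a BSIMM16 activity: the policy (every pip dependency must pin an exact version and a SHA-256 hash) and the function name are assumptions chosen for the example.

```python
import re

# Hypothetical policy: each dependency line must pin an exact version
# AND a sha256 hash, making builds reproducible and tamper-evident.
PINNED = re.compile(r"^[A-Za-z0-9_.\-]+==[\w.]+ --hash=sha256:[0-9a-f]{64}$")

def check_requirements(lines):
    """Return the dependency lines that violate the pinning policy."""
    violations = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if not PINNED.match(line):
            violations.append(line)
    return violations

good = ["requests==2.32.3 --hash=sha256:" + "a" * 64]
bad = ["requests>=2.0"]  # floating version, no hash: fails the gate

print(check_requirements(good))  # [] -- policy satisfied
print(check_requirements(bad))   # the offending line, so CI can fail fast
```

A check like this runs on every commit at machine speed, with no reviewer in the loop, which is the shape of control the BSIMM16 data shows leading firms adopting across the supply chain.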
BSIMM16 identifies a dramatic cultural shift in training: Traditional classroom education is giving way to short‑form, context‑specific, just‑in‑time learning—a shift driven largely by AI adoption.
The activity “Provide expertise via open collaboration channels” grew 29%, reflecting a move toward informal, peer‑driven knowledge sharing embedded in the tools developers already use.
This mirrors how developers use AI: not through long lectures, but through ambient, on‑demand guidance that blends seamlessly into their workflow.
Security knowledge must now move at the same speed as AI‑assisted coding.
Perhaps the most compelling insight from BSIMM16 is how leading organizations are restructuring their software security initiatives.
These organizations are not simply “adopting AI.” They are transforming their security programs to enable AI safely and at scale.
AI adoption is not slowing down, and code generation is only the beginning. AI will soon extend across the rest of the software lifecycle.
The organizations that thrive will be those that build AI‑ready software security programs today.
The BSIMM16 data is unambiguous: AI-driven development requires AI-driven security models. Those that fail to adapt will be left defending systems built faster—and broken faster—than they can secure.