In a recent webinar that I co-presented with Jay Lyman, principal cloud management and container analyst at 451 Research, we had the opportunity to discuss the realities and opportunities that exist in DevSecOps.
In the Q&A portion of the webinar, attendees posed questions about how to solve the problem of building security tools into a continuous integration / continuous delivery (CI/CD) pipeline without slowing down release cycles. Several questions also came through about how to manage false positives using such tools.
These questions made me realize that it’s time to address these issues in more depth. So let’s dig into the common security challenges we’re seeing firsthand and discuss how to address them effectively.
First, however, we should look at the DevSecOps Realities and Opportunities paper based on research conducted by 451 Research (commissioned by Synopsys) to provide more context. Many of the questions about common challenges that were brought up by webinar attendees are pain points identified by survey respondents.
When asked about the most significant application security testing challenges inherent in their CI/CD workflows, survey participants gave these responses:
Now, let’s address each of the challenges and see how we can solve them.
Nearly every application security tool has a command line interface (CLI) to integrate the tool into the CI/CD pipeline. However, simply having a CLI isn’t enough. Why not? When you’re building a pipeline, there are defined checkpoints where these application security tools run. Here’s a visual representation to show you what I mean:
Every activity runs at a predefined checkpoint. For instance, when pre-commit checks are complete, the pipeline starts, and an incremental static application security test (SAST) is run. Once that test is complete, the pipeline should move seamlessly to the next phase: software composition analysis (SCA) and a deeper, more thorough SAST assessment.
It’s important to note that when performing an incremental SAST assessment in the commit phase, if the tool identifies critical issues, your team must be able to break the build without much configuration, while also updating the defect tracking system and metrics dashboard. This alone can be a challenge. Every tool has its own enterprise dashboard, and every tool has its own way of being configured to break the build. And some tools don’t have these capabilities at all.
When you’re selecting a tool, it’s critical to see whether the tool has these capabilities.
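To make the break-the-build checkpoint concrete, here is a minimal sketch of a pipeline gate. It assumes the AST tool’s CLI can export findings as JSON with `severity`, `rule`, and `file` fields; that schema is invented for illustration, since every vendor’s output differs.

```python
import json
import sys

# Hypothetical gate script: reads a findings file exported by an AST
# tool's CLI and fails the build if any critical issues are present.
# The JSON schema ("severity", "rule", "file") varies by vendor and is
# assumed here for illustration.
def gate(findings_path, blocking_severity="critical"):
    with open(findings_path) as f:
        findings = json.load(f)
    blocking = [x for x in findings if x.get("severity") == blocking_severity]
    for finding in blocking:
        print(f"BLOCKING: {finding['rule']} in {finding['file']}")
    return 1 if blocking else 0  # a nonzero exit code breaks the build

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(gate(sys.argv[1]))
```

A wrapper like this is also the natural place to post results to your defect tracker and metrics dashboard before exiting, which keeps that plumbing out of the pipeline definition itself.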
In the early 2000s, virtual machines became very popular. Many firms started installing AST tools within VMs. The primary reason for this was to run several operating systems on the same machine, allowing for easy maintenance and provisioning. Most application security activities ran in isolation, and even when the virtual machines took several minutes to bootstrap and get up and running, it wasn’t considered to be an issue. AST tools were usually not part of CI/CD. Even when they were, scan time wasn’t an issue, because release cycles were very long: applications were deployed to production perhaps every quarter, every six months, or once a year.
Fast-forward 15+ years, and we’re seeing VMs being replaced by containers and the cloud. Containers, as lightweight solutions, can be up and running in seconds. They are highly portable, they can be easily shared across teams and organizations, and container images can quickly be shipped across the world. However, not all tools work in a containerized environment. Key reasons AST tools don’t run effectively in Docker containers include their size, their memory requirements, and the number of processors they need. Docker images, by contrast, are meant to be lightweight, with a small memory footprint, and fast to start.
Identify tools and technologies that work seamlessly in containers and are easily deployed in the cloud. In particular, look for the following:

- The tool should not store any data inside the container. Once a scan finishes, you bring the container down, so you should be able to write results to a shared location easily.
- The tool image should be small. If the tool has a footprint measured in gigabytes, you won’t be able to scale the container.
- You should be able to update the license easily, without hard-coding the license into the image itself.
- Last but not least, the tool should not rely on the container’s IP address, since container IPs change when you stop and start them.
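These container requirements can be sketched as a small launcher helper. The image name, environment variable, CLI flags, and hostname below are all invented for illustration; they are not any vendor’s real interface.

```python
import os

# Hypothetical helper that assembles a `docker run` command for an AST
# scan. The image name, env variable, flags, and hostname are all
# illustrative, not any vendor's real interface.
def build_scan_command(image="ast-tool:latest",
                       shared_results_dir="/mnt/scan-results",
                       license_env="AST_LICENSE_KEY"):
    if license_env not in os.environ:
        # The license is injected at runtime, never baked into the image.
        raise RuntimeError(f"Set {license_env} before launching a scan")
    return [
        "docker", "run", "--rm",
        "-e", license_env,                        # pass the license through
        "-v", f"{shared_results_dir}:/results",   # findings outlive the container
        image, "scan", "--output", "/results",
        "--server", "ast-hub.internal",           # a DNS name, not a container IP
    ]
```

Note how each requirement maps to a flag: the volume mount keeps results outside the container, the environment variable keeps the license out of the image, and the server is addressed by DNS name rather than IP.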
In addition to virtual machines, another common services challenge involves report formatting. For instance, while one tool may provide results in PDF and XML formats, another tool may produce reports in PDF and JSON formats. Still a third tool might provide PDF and HTML reports. This poses a huge challenge when it comes to customizing plugins for common activities such as defect tracking, updating metrics dashboards, and breaking the build.
When piloting an AST tool, identify whether the tool supports different report formats. That way you’ll be able to onboard the tool and customize your plugins easily.
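One way to blunt the report-format problem is a thin normalization layer, so downstream plugins are written once against a single internal shape. The field names below (`checkName`, `path`, the `<issue>` element) are invented for the example; real tools each have their own schema.

```python
import json
import xml.etree.ElementTree as ET

# Sketch of a report normalizer. Each tool's report schema is invented
# here for illustration; the point is mapping every format onto one
# internal shape so defect-tracking, dashboard, and build-breaking
# plugins don't need per-tool logic.
def normalize_json_report(text):
    return [
        {"rule": f["checkName"], "file": f["path"], "severity": f["severity"].lower()}
        for f in json.loads(text)["findings"]
    ]

def normalize_xml_report(text):
    root = ET.fromstring(text)
    return [
        {"rule": i.get("rule"), "file": i.get("file"), "severity": i.get("severity", "").lower()}
        for i in root.iter("issue")
    ]
```

With both tools feeding the same normalized list, swapping a tool in or out becomes a matter of writing one adapter rather than reworking every plugin.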
AST tools take time to run. That’s a fact. SAST tools run through your entire codebase to identify vulnerabilities. Dynamic application security testing (DAST) tools spider through your applications to identify issues. SCA tools move through your entire codebase to identify issues with licensing and open source vulnerabilities.
Let’s take an example of an application with 500,000 lines of code (LOC). If you ran a full SAST scan, it would take anywhere between 30 and 45 minutes to run. DAST would take anywhere between 3 and 4 hours, and SCA would take from a few minutes to hours. This is assuming you were running a full scan every time a change went through your CI/CD pipeline. Let’s take two commercial SAST tools with differing scan times and see how they yield a different number of findings for the same set of applications.
The following tables capture information about the scan execution for a Java and a .NET application, just to showcase the results:
| .NET | SAST Tool A | SAST Tool B |
| --- | --- | --- |
| Total LOC | 100,165 | 134,337 |
| Files | 352 | 1,015 |
| Scan time (mm:ss) | 5:29 | 51:45 |
| Total findings | 2,211 + 7,832 (info) | 1,405 |
Did you notice the difference in every aspect of the tools’ performance? From the way LOC was counted, to the total number of files, to the scan time (nearly 10-fold longer for Tool B), to the number of findings.
This isn’t a unique set of results. Let’s look at a Java example, where the two tools diverge in the same way they did for the .NET application.
| Java | SAST Tool A | SAST Tool B |
| --- | --- | --- |
| Total LOC | 15,635 | 7,931 |
| Files | 109 | 158 |
| Scan time (mm:ss) | 1:35 | 4:28 |
| Total findings | 264 + 1,018 (info) | 310 |
Compounding the problem, many tools don’t support incremental analysis, so you have to plow through the entire codebase even if you changed only a handful of files. Similarly, DAST tools spider through all pages, even though you may have changed only a single JSP file.
The most common and most important solution is to ensure that you configure your tools correctly. Your team needs to know the technology, language, and framework in use to make sure the correct rules are configured and customized. This is especially true for SAST tools.
RELATED: 5 steps to integrate SAST into your CI/CD pipeline
If you’re deploying to production multiple times each day, ensure you run incremental SAST in your pipeline as an inline activity. Run more extensive SAST operations as an out-of-band activity or asynchronously in your pipeline.
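The inline-versus-out-of-band split can be reduced to a small routing decision in the pipeline. The trigger paths and mode labels below are assumptions for the sketch, not any tool’s real configuration.

```python
# Minimal sketch of the routing decision: fast incremental SAST runs
# inline on each commit, while a full scan is queued asynchronously.
# The trigger paths and return labels are assumptions for illustration.
def choose_scan_mode(changed_files, full_scan_triggers=("pom.xml", "build.gradle")):
    # A build-file change can invalidate incremental results,
    # so escalate to a full out-of-band scan.
    if any(f in full_scan_triggers for f in changed_files):
        return "full-async"
    return "incremental-inline"
```

The key design point is that the full scan still happens on every change; it just doesn’t block the release, because it runs asynchronously and reports back when finished.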
With tools, false positives are reminiscent of the “boy who cried wolf” story.
Security tools produce loads of findings. This doesn’t necessarily mean that there’s a great deal of risk. Additionally, more findings per KLOC (thousand lines of code) doesn’t necessarily mean one codebase is riskier than another. You see, all tools suffer from false positives and false negatives. False negatives are more dangerous because they lead to a false sense of security. But when you integrate a tool into the CI/CD pipeline without proper knowledge of the application, language, or framework, false positives pose a major challenge.
Before automating the tool in your pipeline, you’ll need to onboard the application. Onboarding should take place for every application. This is a one-time effort to be performed by a security analyst, along with some input from the development team.
It’s also important to understand the context of the application. Knowing details about the application’s users, trust boundaries, sensitive information processed, security mechanisms implemented, input validation mechanisms in use, and so on will greatly increase your ability to eliminate false positives and determine the true severity of actual problems.
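The analyst’s onboarding work can be captured as a baseline of reviewed false positives that the pipeline then suppresses automatically. The `(rule, file)` fingerprint below is a simplification invented for the sketch; real baselines typically hash surrounding code context as well so findings survive line moves.

```python
# Hypothetical triage filter: findings a security analyst marked as
# false positives during onboarding are recorded in a baseline and
# suppressed on later runs. The (rule, file) fingerprint is a
# simplification; real baselines usually hash code context too.
def filter_known_false_positives(findings, baseline):
    suppressed = {(b["rule"], b["file"]) for b in baseline}
    return [f for f in findings if (f["rule"], f["file"]) not in suppressed]
```

Because the baseline is a reviewed artifact rather than a blanket severity cutoff, developers see only findings no one has already dispositioned, which keeps trust in the pipeline’s alerts high.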
Not all SAST engines have the same accuracy. The semantic analyzer tends to report many false positives. The dataflow engine tends to be more accurate.
Developers are taught to emphasize functionality over security. It’s important to understand that many developers aren’t security experts. For this reason, it’s crucial to consistently support developers by reminding them of the risks of vulnerable code. Training materials are often static and inconvenient to access. Turning to the internet for guidance isn’t always consistent or reliable. Remediation advice from tools isn’t necessarily project-aware or product-specific. On the other hand, security experts are often seen as an impediment to business goals. Security experts may also not be experienced developers.
SAST is one of many checks in an application security assurance program designed to identify and mitigate security vulnerabilities in source code early in the DevSecOps process, helping organizations shift left in the SDLC. Automating SAST tools is another important component of adoption, as it drives efficiency, consistency, and early detection.
Security tools like Code Sight work in the development environment to highlight potentially risky snippets of code before they ever leave the developer’s desktop. That means developers can learn about secure coding practices while they’re on the job, and they also get an incredibly tight feedback loop on the work they’ve already done.
Many customers I’ve worked with must comply with PCI or HIPAA. When you’re working to fine-tune or customize rulesets, regulatory compliance challenges are a concern.
In scenarios such as these, it becomes highly important to differentiate between rules that run inline within the pipeline at a fast pace, and rules that run out-of-band or asynchronously, initiated by the pipeline.
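That differentiation can be sketched as a simple partition of the ruleset by cost. The rule IDs, timings, and compliance tags below are made up for the example; the point is that slow, compliance-mapped rules still run on every change, just asynchronously rather than in the blocking path.

```python
# Illustrative partition of a ruleset into inline and out-of-band sets.
# Rule IDs, timings, and compliance tags are invented for the sketch;
# slow compliance-mapped rules still run on every change, but
# asynchronously rather than in the blocking path.
RULES = [
    {"id": "sql-injection", "avg_seconds": 2, "compliance": ["PCI"]},
    {"id": "hardcoded-secret", "avg_seconds": 1, "compliance": ["PCI", "HIPAA"]},
    {"id": "full-dataflow-audit", "avg_seconds": 900, "compliance": ["PCI", "HIPAA"]},
]

def partition_rules(rules, inline_budget_seconds=60):
    inline = [r for r in rules if r["avg_seconds"] <= inline_budget_seconds]
    out_of_band = [r for r in rules if r["avg_seconds"] > inline_budget_seconds]
    return inline, out_of_band
```

Keeping the compliance mapping on each rule also makes it straightforward to report to auditors that every required check executed, regardless of which lane it ran in.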
Based on the information we’ve covered in this post, I’d like to leave you with several takeaways that you can use to overcome common security challenges found in the CI/CD pipeline: