All applications use APIs (calls to the kernel, software development kits, cryptographic libraries, and SOAP, to name a few examples)—they’re nothing new. What vendors call “API security” today refers to a subset of those APIs: those exposed over a network. By their very nature, these network-exposed APIs enable the free flow of information and interaction between software components. With endpoints exposed to public, cloud, and private networks, attackers have new opportunities to poke and prod these components of your systems. We have seen high-profile breaches at several well-known companies (USPS, T-Mobile, and Salesforce, to name a few) stemming from the exposure or use of insecure API endpoints. This raises a question: how do you know whether your software security initiative (SSI) addresses the controls you need to ensure the APIs you use and produce are secure? To answer that question, you first need to define “API security.”
API security is the protection of network-exposed APIs that your organization both produces and consumes. Of course, this means the use of common security controls germane to APIs: rate limiting and the authentication and authorization of users, services, and requests. It also means understanding data provenance and, when looking at composed systems, where exactly to seek context during design or review discussions. For leaders it means that application security programs capture and apply activities to software exposing or using APIs at the right time. More than just buying some new tools, robust API security stems from a culture of security, with activities across the software security initiative.
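To make one of those common controls concrete, here is a minimal token-bucket rate limiter. This is an illustrative sketch only; the class and parameter names are ours, not drawn from any particular framework, and a production limiter would also need thread safety and per-caller buckets.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: `rate` tokens replenished per second,
    up to a burst capacity of `capacity` tokens."""

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Replenish tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst larger than the bucket's capacity gets throttled: the first
# five requests pass, the sixth is rejected until tokens replenish.
bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(6)]
```

The design choice worth noting is that the bucket allows short bursts (up to `capacity`) while enforcing a sustained average rate, which tends to fit real client behavior better than a fixed per-second window.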
Popular software development trends like microservices architectures have shifted the unit of software relevant to an SSI from an “application” (or monolith) to many subcomponents that expose APIs, each with its own life cycle, contracts to uphold, and security controls to enforce. Software security leaders can find opportunities for improvement in the following areas:
APIs are used between front-end clients (thick clients, browsers) and back-end systems, as well as between back-end components. Muddying the picture further, a single API endpoint may end up serving a mix of both front-end and back-end requests. When individual API endpoints are exposed to a variety of known and unknown callers (consumed, composed, or wrapped upstream by gateways or load balancers), it is difficult to determine which security controls an individual endpoint must enforce. One decision application security leaders can make is to push for APIs that explicitly document the assumed security responsibilities of both provider and consumer.
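One lightweight way to make those assumed responsibilities explicit is to record them alongside the endpoint definition and check that nothing falls through the cracks. The record below is purely illustrative; the field names and control names are ours, not from any standard.

```python
# Hypothetical contract record making security responsibilities explicit
# for one endpoint; field and control names are illustrative only.
ENDPOINT_CONTRACT = {
    "path": "/v1/payments",
    "provider_enforces": [
        "authentication",   # provider verifies caller identity
        "authorization",    # provider checks the caller may act on the resource
        "rate_limiting",
    ],
    "consumer_enforces": [
        "tls_certificate_validation",  # consumer must not disable cert checks
        "input_redaction",             # consumer strips sensitive data first
    ],
    "exposed_to": ["public_internet", "internal_gateway"],
}

def unassigned(contract: dict, required: list) -> list:
    """Return required controls that neither party has claimed."""
    claimed = set(contract["provider_enforces"]) | set(contract["consumer_enforces"])
    return sorted(set(required) - claimed)

# Review question: given the controls we require, who owns what?
gaps = unassigned(ENDPOINT_CONTRACT,
                  ["authentication", "authorization",
                   "rate_limiting", "audit_logging"])
```

A check like this turns the design-review question “who enforces this control?” into something a reviewer (or a pipeline) can answer mechanically; here, `audit_logging` is required but unclaimed.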
Architects are also faced with identifying cross-cutting problems for APIs. Security leaders should pay attention to those security efforts, like unifying access control, as well as those close to the business logic, like unifying customer identity.
Regarding security controls, there are several levels of abstraction within API security: controls within the business logic (protections against abuse stories), controls protecting the business logic (authentication and authorization), and finally controls enabled or defined by the architecture (API gateways, microsegmentation).
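The three levels can be pictured as layers wrapped around a request handler. The sketch below is a toy, not a real framework: the handler, the token check, and the network check are all stand-ins we invented to show where each level sits.

```python
# Sketch of the three levels as layered wrappers around a handler;
# every check and name here is illustrative, not a real framework.
def handle_transfer(user: str, amount: int) -> str:
    # Level 1: control *within* the business logic (abuse-story protection):
    # cap single transfers to blunt automated draining of accounts.
    if amount > 10_000:
        raise ValueError("transfer exceeds per-request limit")
    return f"transferred {amount} for {user}"

def require_auth(handler):
    # Level 2: control *protecting* the business logic (authn/authz).
    def wrapped(request: dict):
        if request.get("token") != "valid":  # stand-in for real verification
            raise PermissionError("unauthenticated")
        return handler(request["user"], request["amount"])
    return wrapped

def gateway(handler, allowed=frozenset({"internal"})):
    # Level 3: control *defined by the architecture* (gateway/segmentation):
    # requests from outside the allowed segment never reach the app at all.
    def wrapped(request: dict):
        if request.get("network") not in allowed:
            raise ConnectionRefusedError("blocked at gateway")
        return handler(request)
    return wrapped

endpoint = gateway(require_auth(handle_transfer))
result = endpoint({"network": "internal", "token": "valid",
                   "user": "alice", "amount": 500})
```

The layering also shows why ownership differs per level: the innermost check belongs to the product team, the middle to shared security libraries, and the outermost to platform or network teams.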
Security controls enabled by architectural decisions are relatively new to application development in the context of API security. Beyond controls applied to the business logic, these extend to concerns such as velocity checking and authentication and authorization decisions. We are also left to wonder how best to isolate a cluster of APIs, where important security controls can be enabled by a gateway. For example, does microsegmentation make the cut? How effective are the controls a service mesh provides?
Some architectural decisions provide chokepoints, which give security architects greater insight into these distributed systems. While some decisions demand a centrally managed approach, others enable an endpoint-enforced approach, and still others are a free-for-all. Leaders must also weigh the claims of vendors entering the market with new application firewalls and data loss prevention (DLP) mechanisms.
We recommend threat modeling, of course. Application security leaders must begin the process of identifying the risks to various types of API (first-party, third-party, client, or consumer), key controls for each API endpoint, acceptable solutions for problems posed by API-heavy architectures (like microservices), and whether to buy into vendor claims as part of a risk management program.
Leaders need to generate visibility into the organization’s API footprint; measure efforts to cover that footprint with process and tools; track, record, and prioritize ongoing security activities; and provide rich context for various types of security analysis. When we talk to program owners about API security, we often find that existing inventory solutions simply do not generate this insight. Program owners should carefully consider whether those solutions can be adapted or whether new ones must be adopted.
Populating your inventory with accurate information is another matter entirely. Organizations can source some information from development groups, but they should also invest in discovering process escapes. Sources of this information include sensors deployed on client and service codebases (or binaries, or live instances), network inspection, OSINT techniques, and pure black-box discovery. At the end of the day, you should be able to feed the results of discovery sensors into your inventory and act on the APIs you know nothing about.
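Reconciling discovery results against the inventory reduces to a set difference: endpoints observed in the wild but never registered are the process escapes to chase first. A minimal sketch, with made-up endpoint names:

```python
# Illustrative reconciliation of a curated inventory against endpoints
# reported by discovery sensors; all endpoint paths are made up.
inventory = {"/v1/users", "/v1/orders", "/v1/orders/{id}"}

discovered = {
    "/v1/users",        # known and inventoried
    "/v1/orders",
    "/internal/debug",  # process escape: live but never registered
    "/v2/orders",       # new version shipped without registration
}

unknown = sorted(discovered - inventory)  # act on these first
stale = sorted(inventory - discovered)    # inventoried but not observed
```

Both directions of the difference are useful: `unknown` drives triage of shadow APIs, while `stale` flags inventory entries that may be decommissioned or hidden from the sensors.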
Security testing today is as relevant as ever for generating insight into the effectiveness of upstream software security practices. Security testing of APIs poses new challenges for manual, automated, and hybrid activities. Context is one such gap: a tester who receives an API but lacks the ability to form inputs or intuit a threat model won’t find the kinds of high-value problems that challenge SSIs to improve. Tools certainly do not fare much better.
Static analysis tools, which are effective at identifying software security issues specific to languages or well-understood classes of injection attack, continue to be effective against API-heavy codebases, but only if those tools also model the libraries and platforms used to expose those API routes. Static code analysis has never done a great job of discovering business logic flaws, and API-heavy projects require a cross-codebase reasoning capability that further exacerbates this gap. Static analysis remains an important tool in the toolbox, but leaders should evaluate a tool’s ability to find defects in code written with their organization’s most popular API platforms. Luckily, organizations that have adopted static analysis to drive the adoption of security controls (like the use of authentication and authorization libraries) will find their strategies still work for API security.
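As a toy illustration of that last strategy, a check can scan route definitions for a required control. Everything here is invented for the example: the Flask-style source snippet, the `@require_auth` decorator name, and the regex; real static analysis tools model the framework far more deeply than pattern matching can.

```python
import re

# Toy static check (illustrative only): flag route handlers in a
# Flask-style codebase that lack an authentication decorator.
SOURCE = '''
@app.route("/v1/orders")
@require_auth
def list_orders(): ...

@app.route("/v1/debug")
def debug_info(): ...
'''

def routes_missing_auth(source: str) -> list:
    """Return route paths whose handlers have no @require_auth decorator."""
    flagged = []
    # Capture the route path and any decorators between @app.route and def.
    blocks = re.findall(r'@app\.route\("([^"]+)"\)\n((?:@\w+\n)*)def', source)
    for path, decorators in blocks:
        if "@require_auth" not in decorators:
            flagged.append(path)
    return flagged

flagged = routes_missing_auth(SOURCE)
```

The point is the policy, not the regex: “every route must pass through the blessed authentication library” is exactly the kind of rule a mature static analysis deployment can enforce at scale.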
Typical approaches to dynamic analysis that can generate coverage of an API include testing with a client (or harness), testing with behavioral tests, and testing with a specification. The solution here isn’t to pick one approach and force development teams to funnel into a single testing tool, but to support the variety of testing arrangements possible. Leaders should hunt for ways to incentivize projects to adopt practices that increase testability; cost and speed are two great places to start.
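Specification-driven testing, for instance, can walk a machine-readable description of the API and assert baseline behaviors. The sketch below invents a tiny spec format and a fake client to stand in for a real HTTP library; it checks that every route marked as protected rejects anonymous calls.

```python
# Minimal sketch of specification-driven behavioral testing; the spec
# format, routes, and client are all hypothetical stand-ins.
SPEC = {
    "/v1/profile": {"auth_required": True},
    "/v1/health":  {"auth_required": False},
}

def fake_api(path: str, token: str = None) -> int:
    """Stand-in for a real HTTP client; returns a status code."""
    if SPEC[path]["auth_required"] and token is None:
        return 401
    return 200

def check_auth_enforced(spec: dict, client) -> list:
    """Return protected paths that accept anonymous requests."""
    failures = []
    for path, rules in spec.items():
        status = client(path)  # anonymous request, no token supplied
        if rules["auth_required"] and status != 401:
            failures.append(path)
    return failures

failures = check_auth_enforced(SPEC, fake_api)
```

Because the test cases are derived from the specification rather than hand-written per endpoint, coverage grows automatically as the spec grows, which is one way to make the “cheaper and faster” incentive real.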
Modern applications and systems rely on complex systems of APIs exposed through a variety of public and private networks. We can take a few steps to understand how these changes impact various elements of our software security initiatives—and make sure that security is built into software exposing or consuming APIs at the right place and the right time.