You can’t keep things private if they’re not secure.
That doesn’t mean data privacy laws are misnamed. The EU’s General Data Protection Regulation (GDPR)—by far the most famous to date—is very much focused on privacy. So are multiple others at both the state and federal level in the U.S. (some proposed, some pending, some enacted). The California Consumer Privacy Act (CCPA) took effect at the start of this year.
Those laws, in general, contain directives on what data can be collected, how long it can be saved, and how it can be “shared” with “partners.” They also give users various levels of power over how their data is collected and used, plus the right to have it deleted. The CCPA also bans companies from providing a lower level of service to users who decline to have their information collected and shared.
But cyber security is a crucial—even existential—component of privacy. If an organization is breached and loses control of the personal data of its customers or users, compliance with all those privacy directives goes out the window.
That means security failures are likely to become increasingly painful. Yes, data breaches have always been painful, but they have not necessarily crippled organizations. Witness mega-retailer Target, whose recent third-quarter report beat expectations on earnings, profit, revenue, and share price just six years after one of the most catastrophic breaches of the past decade.
But if the company had faced privacy penalties along with everything else, its recovery likely would have taken longer and been less robust.
The GDPR's stiffest existing penalty for violations is 4% of total worldwide annual revenue (or 20 million euros, whichever is greater). Not profit, not earnings: total revenue. Simple math will tell you that potential fines could easily reach into the multiple billions for the world's biggest corporations; a company with $100 billion in annual revenue faces a theoretical ceiling of $4 billion.
And some legislation pending or in the works would take it beyond money. The Mind Your Own Business Act, proposed by U.S. Sen. Ron Wyden, D-Ore., contains provisions for fines comparable to the GDPR's but also calls for "prison terms for executives at corporations that misuse Americans' data and lie about those practices to the government."
The CCPA calls for fines ranging from $2,500 to $7,500 per violation, depending on whether a violation was inadvertent or deliberate, although enforcement will not begin until July 1.
While various mitigating factors could affect the severity of those fines, including "the nature, persistence, length, willfulness, and seriousness of the misconduct," the potential again runs into the billions, even hundreds of billions, because fines are assessed per violation, and in a breach every affected consumer record could count as a violation.
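To make those scales concrete, here is a minimal sketch of the two fee structures. All figures below, both the revenue and the record counts, are hypothetical round numbers chosen for illustration, not data from any actual case.

```python
# Illustrative fine arithmetic; every figure below is hypothetical.

def gdpr_max_fine(annual_revenue: float) -> float:
    """GDPR ceiling: 4% of total annual revenue."""
    return 0.04 * annual_revenue

def ccpa_exposure(affected_records: int, fine_per_violation: int) -> int:
    """CCPA fines are assessed per violation, which in a breach
    could mean per affected consumer record."""
    return affected_records * fine_per_violation

# A hypothetical company with $100 billion in annual revenue:
print(f"GDPR ceiling:     ${gdpr_max_fine(100e9):,.0f}")           # $4,000,000,000

# A hypothetical breach exposing 50 million records:
print(f"CCPA inadvertent: ${ccpa_exposure(50_000_000, 2_500):,}")  # $125,000,000,000
print(f"CCPA deliberate:  ${ccpa_exposure(50_000_000, 7_500):,}")  # $375,000,000,000
```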
The good news is that most impending legislative initiatives recognize the security component of privacy.
Wyden’s bill would “empower” the Federal Trade Commission to “establish minimum privacy and cybersecurity standards.”
New York’s Stop Hacks and Improve Electronic Data Security Act (the SHIELD Act), signed into law this past July, includes new “data security protections,” due to take effect March 21, 2020.
Among its requirements is "to be in compliance with other applicable cybersecurity laws, such as the Gramm-Leach-Bliley Act, focused on financial institutions; HIPAA (Health Insurance Portability and Accountability Act), focused on healthcare; or the Cybersecurity Requirements for Financial Services Companies promulgated by the New York Department of Financial Services."
Alternatively, organizations can implement a data security program that includes "reasonable administrative, technical and physical safeguards." That declaration is followed by a list of specifics that helps define "reasonable."
For example, the “technical” safeguards require that the organization “assess risks in network and software design; assess risks in information processing, transmission and storage; detect, prevent and respond to attacks or system failures; and regularly test and monitor the effectiveness of key controls, systems and procedures.”
Failure to comply with some privacy laws could eventually draw more than government penalties. Alastair Mactaggart, founder and chair of Californians for Consumer Privacy, wants to take the penalties in the CCPA a step further. The law should, he argues, enable consumers to sue businesses if “email address plus password” are stolen due to the organization’s negligence. This would “help cut down on identity theft by encouraging businesses to invest in good security.”
So is all this a signal that a new, golden age of cyber security is approaching?
Don’t hold your breath. As noted above, even the laws that threaten major fines for privacy violations contain provisions that could drastically reduce those penalties if an organization cooperates with regulators and/or makes improvements after the fact.
The GDPR has been in effect for 18 months now, and the biggest "proposed" fine so far is $230 million against British Airways. Yes, that's a lot of money, but it's still only about 1.4% of the company's annual revenue of $16.6 billion.
Another reason: the specifics of the security requirements themselves remain vague, leaving enough wiggle room to guarantee numerous and lengthy appeals of proposed penalties.
As Adam Brown, associate managing consultant at Black Duck, puts it, “When it comes to prescriptions for software security, these policies either say something along the lines of ‘do software security taking into account the state of the art’ or, worse still, ‘reasonable administrative, technical and physical safeguards.’”
Beyond that, Brown said, “the reality is that those to whom this responsibility falls will likely have limited knowledge and experience of software and will possibly confuse software security with security software. So the latest technology that claims to offer some control over risk coming from a lack of software security will be seen as a solution.”
“Unfortunately there is no crypto fairy dust. The only way to have secure software is with a deliberate software security initiative,” he said.
Ironically, that ought to be really good news. There is indeed no crypto fairy dust or magic tool that guarantees security. But organizations can avoid the potential legal fees associated with appealing penalties under increasingly strict privacy laws. The solution: spend that money, or even less, on a substantive software security initiative (SSI) to help protect their data.
As Ian Ashworth, sales engineer at Black Duck, notes, software is at the heart of better security since “applications are becoming the favored targets for cyber attack.”
“I am sure organizations providing software security solutions would welcome more attention being drawn to these risks and happily work with governments to formulate guidance around best practices,” he said.
The key, as experts have been saying for some time now, is to "shift left": conduct security testing from the beginning of, and throughout, the software development life cycle (SDLC). The tools for doing that include SAST (static application security testing), DAST (dynamic application security testing), IAST (interactive application security testing), RASP (runtime application self-protection), and penetration testing. None of them will make a product bulletproof (nothing will), but together they help take an organization out of the "low-hanging fruit" category.
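What might shifting left look like in practice? Here is a minimal sketch of a CI gate that runs a SAST scan on every build and fails the pipeline when high-severity findings appear. The sast-scan command and its JSON report format are invented for illustration; a real tool's CLI and output schema will differ.

```python
# Hypothetical CI gate: run a SAST scan, fail the build on high-severity
# findings. The "sast-scan" command and its report schema are invented
# for illustration; substitute your actual tool's CLI and output format.
import json
import subprocess
import sys

MAX_HIGH_SEVERITY = 0  # tolerate zero high-severity findings

def run_sast_scan(source_dir: str, report_path: str = "sast-report.json") -> list:
    """Invoke the (hypothetical) scanner and return its list of findings."""
    subprocess.run(
        ["sast-scan", "--source", source_dir, "--output", report_path],
        check=True,  # a scan that crashes should also fail the build
    )
    with open(report_path) as f:
        return json.load(f)["findings"]

def main() -> None:
    findings = run_sast_scan("src/")
    high = [f for f in findings if f.get("severity") == "HIGH"]
    print(f"{len(findings)} total findings, {len(high)} high severity")
    if len(high) > MAX_HIGH_SEVERITY:
        sys.exit(1)  # a nonzero exit code fails the CI stage

if __name__ == "__main__":
    main()
```

The same pattern extends to DAST or IAST results later in the pipeline; the point is that the gate runs automatically on every change rather than as a one-time audit before release.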
Beyond individual tools, the Building Security In Maturity Model (BSIMM), a project launched more than a decade ago and now under the Black Duck umbrella, delivers an annual report on SSIs in multiple industry verticals. A self-described "measuring stick" for software security, the project includes 122 participating companies, mainly in eight verticals.
It is not a "how-to" or a prescription that every organization do software security the same way. Instead, it measures what organizations are actually doing, in terms of 119 activities grouped into 12 practices, to secure their software.
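As a rough illustration of that "measuring stick" idea, an assessment boils down to recording which activities a firm was observed performing and tallying them by practice. This is a simplified sketch, not BSIMM's actual scorecard or data; the practice names below are real BSIMM practices, but the observations are invented.

```python
# Simplified BSIMM-style tally: count observed activities per practice.
# The observations below are invented examples, not real BSIMM data.
from collections import Counter

observed = [
    ("Strategy & Metrics", "publish the software security process"),
    ("Training", "conduct software security awareness training"),
    ("Code Review", "use automated code review tools"),
    ("Code Review", "make code review mandatory for all projects"),
    ("Security Testing", "drive tests with security requirements"),
]

def scorecard(observations: list[tuple[str, str]]) -> Counter:
    """Tally observed activities by the practice they belong to."""
    return Counter(practice for practice, _ in observations)

for practice, count in scorecard(observed).most_common():
    print(f"{practice}: {count} observed")
```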
The report is available for free under a Creative Commons license.
In short, better software security is possible. Without it, organizations will find it difficult, if not impossible, to comply with the growing wave of privacy laws.