We Have Lost Perspective

For as long as humans have been writing software, there have been software bugs. The very nature of being human means we aren’t perfect, and therefore the things we create can never be, and never will be, perfect. Twenty years ago these vulnerabilities were (slightly) easier to manage, when new releases of code happened once or twice a year. The controlled release of code, using a waterfall methodology, meant security teams had time to run scans and identify critical issues before the release shipped.

Fast forward two decades and we now live in not just an interconnected digital world, but one where software touches virtually everything we do, with releases sometimes happening multiple times a day. With developers easily outnumbering security teams, and delivering code in a more agile way, a change was needed to make security tooling more accessible to developers and better suited to how they work.

We searched for a solution to this problem, but in the end two different schools of thought emerged: those who wanted to shift left and those who wanted to shield right. By shifting left, we could put security scan results directly in developers’ hands with minimal oversight from the security team. By shielding right, security teams retained control and protection over systems running in production. While both approaches bring a different flavor of security into the mix, together they created a different problem…silos.

Shifting Left

In theory, the idea of shifting left is a no-brainer. If we just close more code vulnerabilities earlier, we won’t have to deal with as much noise in security scans, and the engineering teams will be happier because they can ship code faster with minimal interference from security tools. Boom…problem solved! The reality, however, is a little more nuanced than that. Let’s explore three key challenges with this approach.

First, developers aren’t (usually) security experts. If we discovered a dozen critical vulnerabilities in a security scan and brought those findings back to the development team to remediate…chances are they will need to do some research to understand the root of the issue first. Some vulnerabilities are more complex than others (especially across different programming languages), so the fix might be as simple as sanitizing some input or as complex as fully refactoring the code. This requirement for developers to understand how the security issue was introduced in the first place creates additional “work” for both the broader engineering team and the individual developer.
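To make the “simple” end of that spectrum concrete, here is a minimal sketch (in Python, with a hypothetical schema and function names) of the kind of remediation a scanner frequently asks for: replacing string-built SQL with a parameterized query.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Flagged by most SAST tools: user input concatenated into SQL (injection risk).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # The "simple" fix: a parameterized query treats the input as data, never as
    # SQL, which closes the finding without touching the surrounding design.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

The refactoring end of the spectrum, by contrast, might mean redesigning how an entire service handles authentication or data access, which is why remediation effort is so hard to estimate from the finding alone.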

Second, not every reported vulnerability is going to be accurate. Since the majority of software vulnerabilities are found using scanning tools, the accuracy of the findings can vary (significantly) based on several factors relating to the underlying scan tool. If the engineering team works through our earlier dozen critical vulnerabilities only to find that 80% of them are false positives, not only will they be pissed they spent the time disproving or remediating them, but they will have less trust in any future issues reported by the scanning tools (which further sows discord with the security team).

Finally, just because a vulnerability was discovered doesn’t mean it is exploitable. This is a really important point! When you run most security tooling, whether it be within a developer’s IDE, the build pipeline, or just a code repository…the scanner is going to return every result it can find. These “signals” help you get a cursory look at how pervasive vulnerabilities are across your organization. However, what’s often missing from the results is which of the vulnerabilities are actually exploitable within your production environment (you know, the place where external users actually have access to your application). If the collective total of the scan results shows 5,000 vulnerabilities of varying criticalities, but in reality only 25 of them are actually exploitable in production…then all of those signals quickly become noise.
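Here is a hedged sketch of what that exploitability filter can look like in practice, assuming you can export scan findings and have some inventory of what is actually deployed and internet-exposed. The data structures, field names, and placeholder IDs below are hypothetical, not any particular vendor’s schema.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    vuln_id: str     # placeholder identifier
    severity: str    # e.g. "critical", "high"
    component: str   # the package or service the scanner flagged

@dataclass
class RuntimeAsset:
    component: str
    internet_exposed: bool  # reachable by external users in production

def exploitable_findings(findings: list[Finding], assets: list[RuntimeAsset]) -> list[Finding]:
    # The join that turns thousands of raw signals into the short list that is
    # actually reachable in production; everything else can be scheduled, not paged.
    exposed = {a.component for a in assets if a.internet_exposed}
    return [f for f in findings if f.component in exposed]

# Hypothetical usage:
findings = [
    Finding("VULN-0001", "critical", "payments-api"),
    Finding("VULN-0002", "critical", "internal-batch-job"),
]
assets = [
    RuntimeAsset("payments-api", internet_exposed=True),
    RuntimeAsset("internal-batch-job", internet_exposed=False),
]
print(exploitable_findings(findings, assets))  # only the payments-api finding survives
```

Real exploitability analysis weighs far more than network exposure (reachable code paths, compensating controls, and so on), but even this crude filter shows how runtime context separates signal from noise.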

To be clear, the concept of shift left can have a huge (positive) impact on reducing risk within the enterprise; however, it isn’t a silver bullet. The success of shifting left relies heavily on company culture and the ability to embed security within the Software Development Lifecycle (SDLC) without degrading the Developer Experience (DevEx). This means that Development, Security, and Operations teams all need to work more collaboratively for the shift left approach to be successful. Let’s see if we have any better luck shielding right.

Shielding Right

Similar to finding issues within application code, we also want to be able to detect and remediate issues within our runtime systems. This includes everything from cloud configurations, to network configurations, to the individual systems running in a given environment. The ultimate goal is to “shield” ourselves from malicious actors while allowing legitimate users access to our applications. Shielding right, however, isn’t without its own challenges.

First, we have a similar signal vs. noise problem. This is particularly true when it comes to cloud resources, which are a core component of most modern applications. Imagine you scan a cloud environment for vulnerabilities and several issues are found. By the time you investigate the issues, identify which cloud assets are vulnerable, open a Jira ticket, and assign the ticket to the DevOps team…the vulnerable assets have already changed because a new release went out. Due to the dynamic nature of cloud environments, especially containers, the signals can once again quickly become noise.
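One way to keep those findings from going stale (a sketch, assuming your scanner records the image digest of whatever it scanned) is to key them to immutable identifiers such as the image digest, rather than to the ephemeral container or instance that happened to be running at scan time.

```python
from dataclasses import dataclass

@dataclass
class CloudFinding:
    vuln_id: str        # placeholder identifier
    container_id: str   # ephemeral: likely gone after the next deploy
    image_digest: str   # immutable: identifies the build, not the instance

def still_relevant(finding: CloudFinding, running_digests: set[str]) -> bool:
    # Keying on the digest tells us whether the vulnerable build is still
    # deployed anywhere, even though the scanned container no longer exists.
    return finding.image_digest in running_digests
```

The Jira ticket can then reference the digest, which stays meaningful across releases, instead of an instance ID that will be stale before anyone reads it.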

Second, tracing an application vulnerability back to its source is painful, if not impossible. Let’s say you discover a vulnerability in one of your web applications during a routine security check. From a security perspective the vulnerability might be clear and straightforward. From a development perspective, not so much. Is the issue in the front-end code or the back-end? Which repository does that code base live in? Did we write this code internally, or was it written by a contractor? Being able to trace a vulnerability back to its source is becoming a critical requirement, but one that not many enterprises can effectively meet today.
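One common way to close that gap is to stamp every build artifact with its source metadata, so a runtime finding can be walked back to the repository and commit that produced it. The sketch below assumes your pipeline applies the standard OCI annotations org.opencontainers.image.source and org.opencontainers.image.revision at build time; the registry, repository, and inventory data shown are hypothetical.

```python
# Hypothetical inventory mapping image digests to the labels applied at build time.
IMAGE_LABELS = {
    "registry.example.com/webapp@sha256:abc123": {
        "org.opencontainers.image.source": "https://github.com/example/webapp-backend",
        "org.opencontainers.image.revision": "9f2c1e4",
    },
}

def trace_to_source(image_ref: str) -> tuple[str, str] | None:
    # Resolve a runtime finding's image back to the repo and commit that built it.
    labels = IMAGE_LABELS.get(image_ref)
    if labels is None:
        return None  # an unlabeled build is exactly the traceability gap described above
    return (
        labels["org.opencontainers.image.source"],
        labels["org.opencontainers.image.revision"],
    )

print(trace_to_source("registry.example.com/webapp@sha256:abc123"))
```

With that metadata in place, the “which repository, which team” questions above become a lookup rather than an investigation.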

We once again find ourselves in a quandary because, similar to the shift left approach, shielding right requires close collaboration between Development, Security, and Operations teams.

Interconnected Security

It is becoming quite clear that in order for organizations to reduce their overall risk, they will need to transform the way they do DevSecOps today (if they are doing it at all). This type of transformation requires close interconnection of teams and tools to ensure security can be effectively delivered from the first line of code to the runtime environment where that code will ultimately run.

Breaking down the silos that often exist between Development, Security, and Operations teams allows for a more unified approach to DevSecOps across the enterprise. Sharing context from left to right, and vice versa, creates a complete, interconnected picture of security versus the partial view teams have been working with previously. Vendor solutions that unlock this missing context can have a lasting impact on risk reduction while helping DevSecOps teams work better together.

This guide is just the first in a series that will explore how we can create interconnected security by bringing together application security and cloud security in a way that creates a holistic picture of applications, their security posture, and the risk associated with them. Along the way, we will aim to transform how enterprises adopt DevSecOps today and show what they can expect it to look like in the near future.
