Security has become ever more important in the development process, as vulnerabilities last year caused the 2nd, 3rd and 7th biggest breaches of all time, measured by the number of people affected.
This has exposed the industry’s need to use security tooling more effectively within software development, and to adopt sound security practices earlier in the development life cycle.
Another factor contributing to this growing need is the rise of next-generation software supply-chain attacks, in which attackers intentionally target and compromise upstream open-source projects so they can exploit the vulnerabilities when those projects inevitably flow downstream.
The past year saw a 430% increase in next-generation cyber attacks aimed at actively infiltrating open-source software supply chains, according to the 2020 State of the Software Supply Chain report.
“Attackers are always looking for the path of least resistance. So I think they found a weakness and an amplifying effect in going after open-source projects and open-source developers,” said Brian Fox, the chief technology officer at Sonatype. “If you can somehow find your way into compromising or tricking people into using a hacked version of a very popular project, you’ve just amplified your base right off the bat. It’s not yet well understood, especially in the security domain, that this is the new challenge.”
These next-gen attacks are possible for three main reasons. One is that open-source projects rely on contributions from thousands of volunteer developers, making it difficult to discriminate between community members with good intentions and those with bad ones. Secondly, the projects can incorporate thousands of dependencies that may contain known vulnerabilities. Lastly, the ethos of open source is built on “shared trust,” which can create a fertile environment for preying on other users, according to the report.
However, proper tooling, such as the use of software composition analysis (SCA) solutions, can ameliorate some of these issues. SCA is the process of automating visibility into open-source software (OSS) for the purposes of risk management, security and license compliance.
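At its simplest, the automation SCA provides amounts to matching an application’s declared dependencies against a curated database of known vulnerabilities. The sketch below illustrates that core idea in Python; the advisory entries and package names are hypothetical, and real SCA tools draw on curated feeds such as the NVD rather than a hard-coded dictionary.

```python
# Minimal sketch of an SCA-style dependency check (illustrative only).
# The advisory data and package names below are hypothetical; real tools
# pull from curated vulnerability databases.

ADVISORIES = {
    # (package, vulnerable_version) -> advisory id  (made-up entries)
    ("examplelib", "1.2.0"): "CVE-0000-0001",
}

def parse_manifest(text):
    """Parse 'name==version' lines from a pip-style manifest."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            deps.append((name.lower(), version))
    return deps

def scan(deps):
    """Return advisories matching any declared dependency."""
    return [(pkg, ver, ADVISORIES[(pkg, ver)])
            for pkg, ver in deps if (pkg, ver) in ADVISORIES]

findings = scan(parse_manifest("examplelib==1.2.0\nsafe-pkg==2.0.0\n"))
```

Note that this checks only what the manifest *declares*, a limitation the article returns to later: transitive dependencies and the built artifact itself also need inspection.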
DevOps and Linux-based containers, among other factors, have resulted in a significant increase in the use of OSS by developers, according to Dale Gardner, a senior director and analyst on Gartner’s Digital Workplace Security team. Over 90% of respondents to a July 2019 Gartner survey indicated that they use open-source software.
“Originally, a lot of these [security] tools were focused more on the legal side of open source and less on vulnerabilities, but now security is getting more attention,” Gardner said.
The use of automated SCA
In fact, the State of the Software Supply Chain report found that high-performing development teams are 59% more likely to use automated SCA and are almost five times more likely to successfully update dependencies and to fix vulnerabilities without breakage. The teams are more than 26 times faster at detecting and remediating open-source vulnerabilities, and deploy changes to code 15 times more frequently than their peers.
The high-performer cluster shows that high productivity and superior risk management outcomes can be achieved simultaneously, dispelling the notion that effective risk management practices come at the expense of developer productivity, the report continued.
The main differentiator between the top and bottom performers was that the high performers had a governance structure that relied much more heavily on automated tooling. The top teams were 96% more likely to be able to centrally scan all deployed artifacts for security and license compliance.
“Ideally, a tool should also report on whether compromised or vulnerable sections of code — once incorporated into an application — are executed or exploitable in practice,” Gardner wrote in his report titled “Technology Insight for Software Composition Analysis.” He added, “This would require coordination with a static application security testing (SAST) or an interactive application security testing (IAST) tool able to provide visibility into control and data flow within the application.”
Gardner added that the most common approach now is to integrate a lot of these security tools into IDEs and CLIs.
“If you’re asking developers ‘I need you to go look at this tool that understands software composition or whatever the case may be,’ that tends not to happen,” Gardner said. “Integrating into the IDE eliminates some of the friction with other security tools and it also comes down to economics. If I can spot the problem right at the time the developer introduces something into the code, then it will be a lot cheaper and faster to fix it than if it were down the line. That’s just the way a lot of developers work.”
Reviewing licenses and understanding the vulnerabilities in particular packages are already prominent use cases of SCA solutions, but that’s not all they’re capable of, according to Gardner.
“The areas I expect to grow will have to do with understanding the provenance of a particular package: where did it come from, who’s involved with building it, and how often it’s maintained. That’s the part I see growing most and even that is still relatively nascent,” Gardner said.
The comprehensive view that some SCA solutions provide is not available in tools that rely solely on scanning public repositories.
Relying on public repos to find vulnerabilities — as many security tools still do — is no longer enough, according to Sonatype’s Fox. Some issues are never filed in the National Vulnerability Database (NVD), and even when they are reported, there is often a delay of two weeks or more before they become public information.
“So you end up with these cases where vulnerabilities are widely known because someone blogged about it, and yet if you go to the NVD, it’s not published yet, so there’s this massive lag,” Fox said.
Instead, effective security requires going a step further and inspecting the built application itself to fingerprint what’s actually inside it. This can be done through advanced binary fingerprinting, according to Fox.
The technology tries to deterministically work backwards from the final product to figure out what’s actually inside it.
“It’s as if I hand you a recipe and if you look at it, you could judge a pie or a cake as being safe to eat because the recipe does not say insert poison, right? That’s what those tools are doing. They’re saying, well, it says here sugar, it doesn’t say tainted sugar, and there’s no poison in it. So your cake is safe to eat,” Fox said. “Versus what we’re doing here is we’re actually inspecting the contents of the baked cake and going, wait a minute. There’s chromatography that shows that there’s actually poison in here, even though the recipe didn’t call for it and that’s kind of the fundamental difference.”
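The “inspect the baked cake” idea can be sketched concretely: rather than trusting the declared manifest, hash each file inside the built artifact and look the digests up in an index of known components. This is an illustrative simplification of binary fingerprinting, not Sonatype’s actual technique; the component index and file names below are hypothetical.

```python
# Sketch of fingerprinting a built artifact (illustrative only).
# Hash what is actually inside the archive -- the "baked cake" --
# instead of trusting the declared manifest (the "recipe").
import hashlib
import io
import zipfile

KNOWN_COMPONENTS = {}  # sha256 hex digest -> component label (hypothetical)

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def identify_contents(archive_bytes: bytes):
    """Hash every file inside a zip-style artifact (e.g. a .jar or wheel)
    and report which known components the digests match."""
    matches = {}
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        for name in zf.namelist():
            digest = fingerprint(zf.read(name))
            if digest in KNOWN_COMPONENTS:
                matches[name] = KNOWN_COMPONENTS[digest]
    return matches

# Build a tiny artifact, index one of its files, then identify it.
payload = b"compiled bytes of some-lib 1.0"
KNOWN_COMPONENTS[fingerprint(payload)] = "some-lib 1.0 (known vulnerable)"

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("lib/some-lib.bin", payload)
    zf.writestr("lib/other.bin", b"unrelated bytes")

result = identify_contents(buf.getvalue())
```

Because the match is made on the bytes actually shipped, a vulnerable component is flagged even if no manifest ever mentioned it, which is the fundamental difference Fox describes.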
There has also been a major shift from how application security has traditionally been positioned.
In many of the attacks happening now, the developers and the development infrastructure are the target. And while organizations are focused on making sure the final product is safe before it goes to customers and to the server, in the new world that focus can be irrelevant, according to Fox. The developers might have been the ones compromised the whole time, while data was being siphoned out of the development infrastructure.
“We’ve seen attacks that were stealing SSH keys, certificates, or AWS credentials and turning build farms into cryptominers, all of which has nothing to do with the final product,” Fox said. “In the DevOps world, people talk a lot about Deming and how he helped make Japan make better, more efficient cars for less money by focusing on key principles around supply chains. Well, guess what. Deming wasn’t trying to protect against a sabotage attack of the factory itself. Those processes are designed to make better cars, not to make the factory more secure. And that’s kind of the situation we find ourselves in with these upstream attacks.”
Now, effective security tooling can capture and automate the requirements to help developers make decisions up front and to provide them information and context as they’re picking a dependency, and not after, Fox added.
Also, when the tooling recognizes that a component has a newly disclosed vulnerability, it can determine that it’s not necessarily appropriate to stop the whole team and break every build, because not everyone is tasked with fixing every single vulnerability. Instead, it can notify one or two senior developers about the issue.
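That kind of policy-driven routing can be sketched as a small decision function: only vulnerabilities above a configured severity threshold break builds, and alerts go to the component’s designated owners. The policy values, severity scale and owner names here are all hypothetical, standing in for whatever a real tool lets an enterprise configure.

```python
# Sketch of policy-driven alert routing (illustrative only).
# Severity thresholds and owner mappings below are hypothetical.

POLICY = {
    "break_build_at": "critical",                 # only criticals stop builds
    "owners": {"examplelib": ["alice", "bob"]},   # senior devs per component
}

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def route_alert(component, severity, policy=POLICY):
    """Decide whether to break builds and whom to notify for a new CVE."""
    break_build = (SEVERITY_ORDER.index(severity)
                   >= SEVERITY_ORDER.index(policy["break_build_at"]))
    notify = policy["owners"].get(component, [])
    return {"break_build": break_build, "notify": notify}

decision = route_alert("examplelib", "high")
```

The design point is the one Fox makes: the policy is captured centrally, top-down, but it surfaces to the individual developer only as targeted, immediate information rather than a blanket build failure.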
“It’s a combination of trying to understand what it takes to help the developers do this stuff faster, but also be able to do it with the enterprise top-down view and capturing that policy — not to be Big Brother-y — but to capture the policy so that when you’re the developer, you get that instant information about what’s going on,” Fox said.