Open-source software forms the backbone of most modern applications. According to the 2018 Black Duck by Synopsys Open Source Security and Risk Analysis Report, 96 percent of the 1,100 commercial applications that the company audited for the survey contained open-source components, with each application containing an average of 257 open-source components.
In addition, on average, 57 percent of an application’s code was open-source.
In terms of security, open-source code is no more or less secure than custom code, the report claimed. But because open-source is widely used in both commercial and internal applications, it is attractive to attackers, who get a “target-rich environment when vulnerabilities are disclosed,” the report stated.
According to Rami Sass, co-founder and CEO of WhiteSource, a company that specializes in open-source security, thousands of open-source vulnerabilities are discovered annually. The most notorious of the past year may have been the vulnerability in the open-source web application framework Apache Struts 2, a contributing factor in last year’s Equifax breach, which has since been heavily reported in the media, mainly due to the massive scale of that breach.
From a code perspective, 78 percent of the codebases looked at in the Black Duck by Synopsys report had at least one vulnerability caused by an open-source component, and the average number of vulnerabilities in a codebase was 64.
Knowing what components are in use in your system is key
According to Tim Mackey, senior technical evangelist at Synopsys, organizations should have a governance rule in place to define what the acceptable risks are when using open-source components. That rule should encompass everything from open-source license compliance to how the organization intends to manage and run systems that are dependent upon a certain component.
“The reason that that upper-level governance needs to exist is that it then provides the developers with a template for what is and what is not an acceptable path forward and provides that template to the IT operations team,” he said.
That governance rule also needs to specify the timelines the organization expects for activities such as patching or triaging a new issue.
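One way to make such a policy enforceable rather than aspirational is to encode it as data that automated checks can read. Below is a minimal sketch in Python of what that might look like; the field names, license list, and SLA values are hypothetical illustrations, not a standard schema.

```python
from dataclasses import dataclass, field

# Hypothetical policy record; every name and threshold here is illustrative.
@dataclass
class OpenSourcePolicy:
    allowed_licenses: set = field(
        default_factory=lambda: {"MIT", "Apache-2.0", "BSD-3-Clause"})
    max_allowed_severity: str = "medium"  # block builds on high/critical findings
    critical_patch_sla_days: int = 7      # time allowed to patch a critical issue
    triage_sla_days: int = 2              # time allowed to triage any new issue

def license_permitted(policy: OpenSourcePolicy, license_id: str) -> bool:
    """Check a component's declared license against the policy."""
    return license_id in policy.allowed_licenses

policy = OpenSourcePolicy()
print(license_permitted(policy, "GPL-3.0"))  # False under this example policy
```

A CI job could then compare every incoming component against such a record and fail the build when a license or severity threshold is violated, giving developers and IT operations the same template to work from.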
Once that policy is in place, it is the responsibility of the product owner to ensure that there is an appropriate review process, both at the time a component is added and on an ongoing basis, Mackey explained. Once a component has been ingested, there is a good chance it will stay in the application for a long time, he said. He advised product owners to ensure that within each development sprint, someone performs some amount of library hygiene.
The Black Duck by Synopsys report found that, on average, the vulnerabilities it identified had been publicly known for almost six years. Heartbleed, Logjam, FREAK, DROWN, and POODLE were among them.
In a report titled “Hearts Continue to Bleed: Heartbleed One Year Later,” security company Venafi Labs revealed that as of April 2015, exactly one year after the disclosure of Heartbleed, 74 percent of Global 2000 companies still had public-facing systems vulnerable to it. Heartbleed is a vulnerability in the OpenSSL cryptography library that allows an attacker to extract data, including SSL/TLS keys, from a target without ever being detected, Venafi Labs explained in its report.
Similarly, attackers targeting Equifax were able to expose the information of nearly 150 million consumers between May and July 2017, even though the Apache Foundation had disclosed the Struts 2 vulnerability months earlier, in March. “At that time, companies like Equifax should have checked for the presence of the vulnerability in their codebase. Only that in order to check for the vulnerable version, Equifax and tens of thousands of other companies using the Struts 2 framework would have needed to know that they were using that component. They would have needed to know of its existence in their product,” WhiteSource wrote in a blog post following that breach.
The key thing that organizations must do is actually know what components are in use in their systems. Once they know that information, they can determine if any of those components are vulnerable and take action to remediate that, Sass from WhiteSource explained.
“That’s something that requires attention to work, and to be honest, I don’t think it’s feasible for practically any size of organization to be able to manually discover or figure out all of their open-source inventory and then in that inventory, find the ones that are vulnerable. I’ve never seen anyone do that successfully without using an automated tool,” Sass said. Organizations should look for tools that will automate the visibility and transparency of open-source components that are being used. Those tools should also include alerting for components that do have vulnerabilities.
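As a minimal illustration of what such tools automate, the sketch below checks a pinned Python requirements.txt against the public OSV vulnerability database (https://osv.dev). It assumes network access and a requirements.txt in the working directory; commercial tools do this job continuously and at far larger scale.

```python
import json
import urllib.request

def osv_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query the public OSV database for known vulnerabilities
    affecting one package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Walk the pinned dependency list and alert on any vulnerable entries.
with open("requirements.txt") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        for vuln in osv_vulns(name, version):
            print(f"VULNERABLE: {name}=={version}: {vuln['id']}")
```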
Another challenge organizations face when managing open-source projects in their environment is that open-source works differently from proprietary software: there is no single vendor to get a patch from, explained Mackey from Synopsys. “You have to first figure out where you got your code from in the first place because that’s where your patch needs to come from.”
For example, a company could get a patch for Apache Struts from the upstream project, where the code is actually developed, or from a company that maintains its own distribution of it. Those patches may each differ slightly, so if your organization does not know where its copy came from in the first place, patching might change the software’s behavior, Mackey explained.
Ayal Tirosh, senior principal analyst at Gartner, recommends having a system in place that can alert you when there is an issue. A bill of materials is a great proactive approach, but in terms of governance, it only captures the issues known at the time it was created. A few months down the road, when a new vulnerability is revealed, it helps to have a mechanism in place that alerts the organization to such issues as they arise, Tirosh explained.
“There are a class of tools — and there are some open-source solutions that do the same thing — that’ll essentially run through and fingerprint these different components and then provide you information from a variety of sources, usually at the very least public sources, sometimes proprietary databases, that’ll provide you that bill of materials saying these are the components that you have and these are the known vulnerabilities associated with those,” Tirosh explained.
Those tools might also alert you if there is a new version available as well, though Tirosh cautions that a new version isn’t necessarily more secure. It might still suffer from the same issues as previous versions, or it might have a completely different issue, he explained.
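The bill of materials Tirosh describes can start as simply as a machine-readable list of component names and versions. The sketch below emits one for pinned Python dependencies; the field names loosely follow the CycloneDX SBOM format, but this is an illustration, not a spec-compliant generator.

```python
import json

def build_sbom(requirements_path: str) -> dict:
    """Emit a minimal bill of materials for pinned dependencies."""
    components = []
    with open(requirements_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, version = line.split("==", 1)
            components.append(
                {"type": "library", "name": name, "version": version})
    # Top-level keys loosely mirror CycloneDX JSON; not spec-compliant.
    return {"bomFormat": "CycloneDX", "specVersion": "1.4",
            "components": components}

print(json.dumps(build_sbom("requirements.txt"), indent=2))
```

Kept under version control and regenerated on every build, a file like this is what alerting tools diff against when a new vulnerability is disclosed.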
Not all vulnerabilities are a concern
Fortunately, organizations may be able to save some time and resources by prioritizing effectively. According to WhiteSource’s The State of Open Source Vulnerabilities Management report, 72 percent of the vulnerabilities found in the 2,000 Java applications they tested were deemed ineffective. According to the report, vulnerabilities are ineffective if the proprietary code does not make calls to the vulnerable functionality.
“A vulnerable functionality does not necessarily make a project vulnerable, since the proprietary code may not be making calls to that functionality,” WhiteSource’s report stated. By determining what the actual risk a vulnerability may pose, organizations can save security and development teams precious time.
“If you’re able to map out how you’re using vulnerable components, you can save yourself a lot of work that’s not necessary and on the other hand make sure that you are protected in the places that are actual exposures,” said Sass.
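Mapping how vulnerable components are used is what this kind of reachability (or "effectiveness") analysis does. The toy sketch below only matches direct module-level calls by name; real tools build full call graphs across the dependency tree. The vulnerable-call entry is a hypothetical example.

```python
import ast
import pathlib

# Hypothetical entry: pretend yaml.load has a disclosed vulnerability.
VULNERABLE_CALLS = {("yaml", "load")}

def calls_vulnerable_code(source_root: str) -> list:
    """Report locations where our own code calls a flagged function."""
    hits = []
    for path in pathlib.Path(source_root).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Attribute)
                    and isinstance(node.func.value, ast.Name)
                    and (node.func.value.id, node.func.attr) in VULNERABLE_CALLS):
                hits.append((str(path), node.lineno))
    return hits

# An empty result suggests the vulnerability is "ineffective" for this code,
# though indirect calls through other libraries would not be caught here.
print(calls_vulnerable_code("src"))
```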
While you can’t completely ignore those vulnerabilities, your organization can prioritize its security efforts based on importance. Maya Kaczorowski, product manager at Google Cloud, said that how much an organization should care about such vulnerabilities depends on its specific infrastructure.
“If it’s still possible for someone through another vulnerability to gain access to that vulnerability, then that could be a concern,” said Kaczorowski. “There’s been a number of attacks where we’ve seen attackers chain together multiple vulnerabilities to get access to something. And so having one vulnerability might not actually expose you to anything, but having two or three where together they can have a higher impact would be a concerning situation. So the best practice would still be to patch that or remove it from your infrastructure if you don’t need it.”
Treat open-source, proprietary code the same
Sass believes that governance over open-source code should be as strict as governance over proprietary code. Mackey agrees that “they should treat any code that’s coming into their organization as if it was owned by themselves.”
“I always like to put on the lens of the customers and the users,” said Kathy Wang, director of security at GitLab. In the end, if a user is affected by an attack, they aren’t going to care whether it was the result of a smaller project being integrated with higher or lower standards. “So in the end, I think we have to apply the same kind of standards across all source code bases regardless of whether it was an open-source or proprietary part of our source code base.”
Gartner’s Tirosh believes that conceptually, neither one is riskier than the other. “Both of them can introduce serious issues. Both of them need to be assessed and that risk needs to be controlled in an intelligent fashion. If you look at the statistics and you find that the majority of the code is open-source code, then that alone would say that this demands some attention, at least equal to what they’re doing with the custom code. And for some organizations that number’s gonna vary.”
Tsvi Korren, chief solutions architect at Aqua Security, also agrees that the security standards need to be the same, regardless of whether an organization wrote its own code or is using some open-source component. “It doesn’t really matter if you write your own code or you get pre-compiled components or code snippets or just libraries, they really should all adhere to the same security practices,” he said.
He added that the same goes for commercial software. Just because you’re getting code from a well-known company doesn’t mean you don’t need to run vulnerability testing against it, he explained. All incoming code should be tested according to your organization’s standards, no matter how reputable the source.
If you have access to the source code, the code should go through source-code analysis. If you do not have access to that source code, it still needs to go through a review of the known vulnerabilities that are assigned to that package, Korren explained.
Once the application is built, it should also go through penetration testing, Korren said. “There are layers and layers of security. You can’t execute everything, everywhere. You don’t sometimes have access to the raw source code of everything like you have for your own software, but you do what you can do so that at the end of the development cycle, when you have an application that has some custom-made code, some open-source code, and some commercial packages, you come out the other side certifying that it does meet your security standards.”
Governance needs to be a team effort
Ultimately, the security officer is going to be the one held responsible if anything bad happens, but securing software should really be a team effort.
Wang said that it’s important to empower developers to raise the bar on how they develop securely. But security teams are also responsible for ensuring that whatever product comes out of the development pipeline was created in a way that is consistent and secure, she explained. Often, however, security teams are siloed and don’t get brought into the development process until a project is almost complete and ready to be released. Unfortunately, the later security comes in, the more expensive it becomes to fix a problem, she explained.
“What we’re trying to do is attack this from two different directions. One is to get security looped in,” Wang said. “That means the security team needs to be a very good collaborative player across all of engineering, so that there’s a good working relationship there, but the development team also needs to be empowered to do their own checks early in the process so that it’s not all being left for the end of the process.”
Wang believes that the industry is beginning to move in that direction. “A lot of the very large enterprise companies are trying to move in that direction, but it takes longer for them to change their existing processes so that people can start following that.” She believes that cloud-native companies may be moving faster toward that goal because they are generally more agile and have more tools and processes in place to achieve it.
Aqua Security’s Tsvi Korren agrees that in the event of a major breach or security event, the security person is going to be the one who gets blamed. “The overall accountability lies with security,” he said.
Different strategies for security
There are several strategies security teams can adopt to secure software with open-source components, he explained.
One strategy is to have security teams do everything themselves. This means they will likely have to build developer-like skills so they can review and understand source code. Many organizations don’t have that, so they instead leverage vulnerability assessment tools or embed security leads inside application teams, Korren explained.
This isn’t without its challenges, especially when development and security teams don’t communicate. “The security people need to have a little bit of understanding of application design and methodology, while you need that developer to understand what that security information is,” explained Korren.
Korren believes that security is shifting left as well. “So while the accountability for security ultimately lies with security [teams], the execution of that is now getting closer and closer to development.”
Compared to developers, IT operations will be subject to greater regulatory scrutiny, said Synopsys’s Mackey. “They’re going to, out of necessity, likely trust that the developers have done their job and that the vendors have done their job. But they need to be in a position where they can actually verify and vet that.”
If there is a vulnerable component that has not been patched, IT operations is ultimately going to have to deal with whatever attacks are mounted against that component, as well as deal with the cleanup after a breach or other malicious activity, Mackey explained.
Small vs. large open-source projects
As mentioned in the Black Duck by Synopsys report, larger projects are a more attractive target for attackers because they are present in so many different applications. But those larger projects may also be more secure for that same reason.
“I think inherently, large open-source projects tend to actually have fairly high levels of security in that there are lots of people who care about it and look for that,” said Google’s Kaczorowski. Larger projects tend to have a huge amount of uptake and use in real organizations, she explained, so there are a lot of eyes on a problem and more people finding vulnerabilities. Smaller projects might be full of vulnerabilities, but because they are small and not widely used, the chances of those vulnerabilities being exploited are equally small.
According to Wang, besides the security risk, there is also a risk associated with continuity of support with smaller projects. From the security side, a smaller project may mean that you can do audits in a potentially less complex way because you’re looking at fewer lines of code. “But the decision about whether to integrate a smaller project is more than just about the security posture, it’s about the continuity of being able to continue that integration over many years as well,” Wang said.
Sass believes that if all things are equal and you have the option to choose between a popular, well-maintained project and a smaller, unmaintained project, you should go with the more popular one. “But very often there aren’t many alternatives and if you’re looking for a very specific type of functionality to put into your code and you want to save yourself the trouble of building from scratch, then your only options will be lesser-known projects,” he said.
Some open-source projects even have bug bounty programs to strengthen their security, explained Kaczorowski. For example, GitLab relies on external developers to contribute, and it also uses HackerOne, Wang explained. HackerOne is a platform through which security researchers can submit vulnerabilities they have discovered. GitLab triages those findings, validates them, and then awards bounties, she said. Smaller projects might not have the resources to take advantage of programs like this.
There are also human risks to using open-source projects. Even if you verify that a project has enough maintainers, you won’t necessarily be able to tell whether those maintainers have two-factor authentication enabled on their accounts. “It might actually be very easy for someone to inject new code into that project,” said Kaczorowski.
Kaczorowski explained that you should probably also look to see whether the project has a public disclosure policy: in other words, what action it takes if somebody comes to it with a newly discovered vulnerability. While not all large projects will have a good disclosure policy, larger projects with more maintainers may be able to respond effectively, while a project owned and maintained by only a few people might not have the manpower to put such a process in place.
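There is no universal registry of disclosure policies, but many projects hosted on GitHub publish one in a SECURITY.md file at the repository root, which is easy to check for. The repository names in the sketch below are placeholders.

```python
import urllib.error
import urllib.request

def has_security_policy(owner: str, repo: str) -> bool:
    """Heuristic: does the GitHub repo publish a SECURITY.md file?"""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/SECURITY.md"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # typically a 404: no published policy found

print(has_security_policy("example-org", "example-project"))  # placeholder names
```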
Moving to a more secure way of incorporating open-source
According to Gartner’s Tirosh, the appearance of large vulnerabilities in open-source components is not scaring organizations away from using open-source, but rather is making them put more of a focus on security. “I think it’s oftentimes hard for those security issues to drive folks away from the value of what they’re getting out of these open-source components,” he said.
He explained that, in his experience, the number of conversations he’s had around tools that can identify components and alert organizations to issues has increased quite a bit in the last year, coinciding with some of the major issues that have occurred. He also attributed the increased interest to organizations getting better at identifying vulnerabilities. “They’re recognizing that the question isn’t just the internally developed stuff, but also open-source code. I would say that the awareness of the issue and the desire to address it in one way, shape, or form has risen and so the security part of the equation is now more prominent.”