For all the talk of server and network security, the fact remains that applications are among the main attack vectors leveraged by bad actors.
This is largely because development teams are focused on delivering new functionality and features as quickly as possible. They are not usually trained in security practices, and often have little desire to learn them.
That can leave modern applications – which are more likely to be assembled from open-source and third-party components, and tied together with APIs and other connectors – vulnerable to intrusion.
Development today is driven by short-term benefits, but faces long-term risk, according to Jonathan Knudsen, the head of global research in the Synopsys Software Integrity Group’s Cybersecurity Research Center. “You’re trying to make something that works as fast as you can, and that means that you’re not necessarily thinking about how somebody could misuse the thing” down the road, Knudsen said. “The short-term benefit is you build something that works, that’s useful, that people will pay for and you make money. And the long-term thing is, if you don’t build it carefully, and if you don’t think about security all along the way, something bad is going to happen. But it’s not so immediate, so you get caught up in the immediacy of making something that works.”
According to Knudsen, there are three kinds of software vulnerabilities: design vulnerabilities, configuration vulnerabilities and code vulnerabilities. “Developers are making the code vulnerability mistakes, or somebody who developed an open source package that you’re using. Design time vulnerabilities are, before you write code, you’re thinking about the application or an application feature, and you’re figuring out how it should work and what the requirements are and so on and so forth. And if you don’t do the design carefully you can make something that even if the developers implement it perfectly, it’ll still be wrong because it’s got a design flaw.”
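A concrete example of the code-vulnerability category Knudsen mentions (a hypothetical illustration, not one from the interview) is SQL injection, where the flaw lives in a few lines of code rather than in the design:

```python
import sqlite3

# Hypothetical example of a code-level vulnerability: building SQL by
# string concatenation -- and the parameterized-query fix.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name):
    # Vulnerable: attacker-controlled input is spliced into the query,
    # so name = "' OR '1'='1" returns every row in the table.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # Safe: a parameterized query treats the input as data, not as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks all rows
print(find_user_safe(payload))        # returns nothing
```

Note that even a perfect implementation of a badly designed feature (say, a password-reset flow that emails the old password) would still be insecure, which is Knudsen's point about design-time flaws.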
Knudsen explained a number of factors behind these vulnerabilities. First is the use of open-source components. A Synopsys report from earlier this year found that 88% of organizations do not keep up with open-source updates. “If I choose to use this open source component, how risky is it?” he said. “There are many things to look at, like, how many people are already using that thing? Because the more it’s used, the more it gets exercised, the more the bad stuff shakes out before you get to it, hopefully.”
Another thing to look at is the team behind that component, he added. “Who is the development team behind it? You know, who are these people? Are they full time? Are they volunteers? How active are they? Did they last update this thing eight months ago, two years ago? Those are just sort of operational concerns. But then, if you are going to get more specific, you’d ask, did the development team ever run any security test tools on it? Have they even thought about security?”
This, he pointed out, is largely impractical for a development team to research, because they just need a component with a particular function, and want to grab it and drop it into the application and start using it. Knudsen added that there are a number of efforts underway on how to score open-source projects based on risk, “but nobody’s come up with a magic formula.”
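As Knudsen notes, nobody has a magic formula for scoring open-source risk. Purely as an illustration of the signals he lists (adoption, maintainer activity, evidence of security testing), a toy heuristic with arbitrary, made-up weights might look like:

```python
from dataclasses import dataclass

# Toy risk heuristic (hypothetical -- the article stresses no magic
# formula exists). Signals mirror the ones Knudsen lists: real-world
# usage, maintainer activity, and evidence of security testing.
@dataclass
class Component:
    name: str
    monthly_downloads: int
    months_since_last_release: int
    maintainers_active: bool
    has_security_testing: bool

def risk_score(c: Component) -> int:
    """Higher score = higher risk. The weights are arbitrary illustrations."""
    score = 0
    if c.monthly_downloads < 10_000:        # little real-world exercise
        score += 2
    if c.months_since_last_release > 12:    # stale project
        score += 3
    if not c.maintainers_active:            # volunteers gone quiet?
        score += 2
    if not c.has_security_testing:          # no SAST/fuzzing evidence
        score += 3
    return score

lib = Component("leftpadx", 500, 24, False, False)
print(risk_score(lib))  # 10
```

Community efforts in this space weight many more signals than this sketch, and still involve judgment calls, which is why the problem remains unsolved.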
The need for speed in application development and delivery has led to the “shift left” movement, as organizations try to bring things like testing and security earlier in the life cycle, so those tasks aren’t left to the end, where they can slow down the release of new functionality. That means that more of those efforts are being put on developers. As Knudsen explained, “One of the things is this focus on the developer, because everybody thinks, ‘Okay, developers write code, and code can have mistakes or vulnerabilities in it.'”
But, he noted, it’s not really all about the developers; it’s also the process around them. “When you create software, you start out, you design it. You’re not writing any code, you’re just thinking about what it should do. And then, you write it, and you test it, and you deploy it or release it or whatever. And the developers are really only one part of that. And so you can help developers make fewer mistakes by giving them training and helping them understand security and the issues. But it shouldn’t be on them. Developers are fundamentally creative people who solve problems and make things work, and you should just let them run with that and do that. But if you put them in a process where there’s threat analysis going on when you design the application, where there’s security testing going on during the testing phase, and just feeding back those results to the development team, they will fix the stuff. And you’ll have a better product when you release it.”
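One small, concrete form the process Knudsen describes can take is an automated check that flags problems before code ever reaches the testing phase. The sketch below is purely hypothetical (a crude scanner for hard-coded secrets; the file names and regex are illustrative, and real shift-left tooling is far more thorough):

```python
import re
import tempfile
from pathlib import Path

# Hypothetical shift-left check: flag source lines that appear to
# contain hard-coded secrets, so the finding reaches the developer
# early instead of surfacing after release.
SECRET_PATTERN = re.compile(
    r"(api_key|password|secret)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
)

def scan(paths):
    """Return (file, line number, text) for each suspicious line."""
    findings = []
    for path in paths:
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if SECRET_PATTERN.search(line):
                findings.append((str(path), lineno, line.strip()))
    return findings

# Demo on a throwaway file:
demo = Path(tempfile.mkdtemp()) / "config.py"
demo.write_text('password = "hunter2"\n')
for f, n, text in scan([demo]):
    print(f"{f}:{n}: possible hard-coded secret: {text}")
```

In a CI pipeline, a nonzero exit code from a check like this would fail the build, feeding the result straight back to the development team, which is exactly the loop Knudsen describes.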
To help create an optimal security process for developers, Synopsys offers many application security testing products and tools, including industry-leading solutions in SAST, DAST, and SCA. To learn more visit synopsys.com.
Content provided by SD Times and Synopsys