With more development teams today using open-source and third-party components to build out their applications, the API has become the biggest area of concern for security teams. That is where vulnerabilities are likely to arise, because keeping those interfaces up to date has lagged.

In a recent survey, the research firm Forrester asked security decision makers in which phase of the application lifecycle they planned to adopt the following technologies: static application security testing (SAST) came in at 34%, software composition analysis (SCA) at 37%, dynamic application security testing (DAST) at 50%, and interactive application security testing (IAST) at 40%. Janet Worthington, a senior analyst at Forrester who advises security and risk professionals, said the number of people planning to adopt SAST was low because it is already well known and many organizations have already implemented the practice and tools.

One of the drivers of that adoption was the wake-up call of the log4j vulnerability: developers using open source, she said, understand their direct dependencies but might not consider dependencies of dependencies.
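To illustrate that distinction, here is a minimal sketch, in Python, of how a package such as log4j can reach an application that never declares it directly. The dependency graph below is invented for the example, not data from any real project.

```python
# Minimal sketch: surface transitive ("dependencies of dependencies") packages
# that a team may never have reviewed. The graph below is illustrative only.
from collections import deque

# Hypothetical dependency graph: package -> packages it pulls in directly
DEPENDENCY_GRAPH = {
    "my-web-app": ["web-framework", "logging-facade"],
    "web-framework": ["http-client", "template-engine"],
    "logging-facade": ["log4j-core"],   # the indirect route to log4j
    "http-client": [],
    "template-engine": [],
    "log4j-core": [],
}

def transitive_dependencies(root):
    """Return every package reachable from root, excluding its direct dependencies."""
    direct = set(DEPENDENCY_GRAPH.get(root, []))
    seen, queue = set(), deque(direct)
    while queue:
        pkg = queue.popleft()
        for dep in DEPENDENCY_GRAPH.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen - direct

if __name__ == "__main__":
    print("Indirect dependencies to review:",
          sorted(transitive_dependencies("my-web-app")))
    # "log4j-core" shows up even though my-web-app never declares it
```

Real SCA tools do this walk against lockfiles and package registries at far greater depth, but the principle is the same: the risky component is often one the team never chose.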

Open source and SCA

According to Forrester research, 53% of breaches from external attacks are attributed to applications and the application layer. Worthington explained that while organizations are implementing SAST, DAST and SCA, they are not implementing them for all of their applications. “When we look at the different tools like SAST and SCA, for example, we’re seeing more people actually running software composition analysis on their customer-facing applications,” she said. “And SAST is getting there as well, but almost 75% of the respondents who we asked are running SCA on all of their external-facing applications, and that, if you can believe it, is much larger than web application firewalls, and WAFs are actually there to protect all your customer-facing applications. Less than 40% of the respondents will say they cover all their applications.”

Worthington went on to say that more organizations are seeing the need for software composition analysis because of those breaches, but added that a problem with security testing today is that some of the older tools are harder to integrate early in the development life cycle: the points where developers are writing code, committing it in the CI/CD pipeline, and opening merge requests. “The reason we’re seeing more SCA and SAST tools there is because developers get that immediate feedback of, hey, there’s something up with the code that you just checked in. It’s still going to be in the context of what they’re thinking about before they move on to the next sprint. And it’s the best place to kind of give them that feedback.”
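As a rough illustration of that kind of pipeline feedback, the sketch below shows a hypothetical merge-request gate. It assumes an earlier pipeline step has written scan findings to a JSON report; the file name and report format are invented for this example and do not correspond to any particular tool.

```python
# Minimal sketch of a merge-request gate: read a findings report produced by
# an SCA or SAST scan earlier in the pipeline and fail the build on serious
# issues, so developers get feedback before moving on to the next sprint.
# The report path and JSON shape are assumptions for illustration only.
import json
import sys

REPORT_PATH = "scan-findings.json"      # hypothetical report from a prior step
BLOCKING_SEVERITIES = {"critical", "high"}

def main():
    with open(REPORT_PATH) as fh:
        findings = json.load(fh)        # assumed: a list of finding objects

    blocking = [f for f in findings
                if f.get("severity", "").lower() in BLOCKING_SEVERITIES]

    for f in blocking:
        # Give the developer immediate, contextual feedback in the CI log
        print(f"{f.get('file', '?')}: {f.get('rule', '?')} "
              f"({f.get('severity')}) - {f.get('message', '')}")

    if blocking:
        print(f"{len(blocking)} blocking finding(s); failing the pipeline.")
        return 1
    print("No blocking findings.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The point of a gate like this is timing: the finding reaches the developer while the change is still fresh, rather than weeks later in a security review.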


The best tools, she said, are not only doing that, but they’re providing very good remediation guidance. “What I mean by that is, they’re providing code examples, to say, ‘Hey, somebody found something similar to what you’re trying to do. Want to fix it this way?'”

Rob Cuddy, customer experience executive at HCL Software, said the company is seeing an uptick in demand for remediation. Engineers, he said, are telling them: “‘I can find stuff really well, but I don’t know how to fix it. So help me do that.’ Auto remediation, I think, is going to be something that continues to grow.”

Securing APIs

When asked what respondents were planning to use during the development phase, Worthington said 50% reported they plan to implement DAST in development. “Five years ago you wouldn’t have seen that, and what this really calls attention to is API security,” Worthington said. “[That is] something everyone is trying to get a handle on in terms of what APIs they have, the inventory, what APIs are governed, and what APIs are secured in production.”

And now, she added, people are putting more emphasis on understanding what APIs they have, and what vulnerabilities may exist in them, during the pre-release phase or prior to production. DAST in development signals an API security approach, she said, because “as you’re developing, you develop the APIs first before you develop your web application.” Forrester, she said, sees that as an indicator that companies are embracing DevSecOps and looking to test those APIs early in the development cycle.

API security also has a part in software supply chain security, with IAST playing a growing role, and encompassing parts of SCA as well, according to Colin Bell, AppScan CTO at HCL Software. “Supply chain is more a process than it is necessarily any feature of a product,” Bell said. “Products feed into that. So SAST and DAST and IAST all feed into the software supply chain, but bringing that together is something that we’re working on, and maybe even looking at partners to help.”

Forrester’s Worthington explained that DAST is really black-box testing, meaning it has no insight into the application’s internals. “You typically have to have a running version of your web application up, and it’s sending HTTP requests to try and simulate an attacker,” she said. “Now we’re seeing more developer-focused test tools that don’t actually need to hit the web application, they can hit the APIs. And that’s now where you’re going to secure things – at the API level.”
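As a toy illustration of that black-box idea, the sketch below sends a few attacker-style payloads to a hypothetical local API endpoint and flags responses that look suspicious. The URL, parameter name, and heuristics are assumptions for the example, not any vendor’s implementation.

```python
# Toy DAST-style probe: exercise a running API from the outside, with no
# knowledge of its internals, and look for responses that suggest a problem.
# The endpoint and query parameter are hypothetical.
import requests

TARGET = "http://localhost:8080/api/users"   # assumed local test instance
PAYLOADS = [
    "' OR '1'='1",                 # classic SQL injection probe
    "<script>alert(1)</script>",   # reflected XSS probe
    "../../etc/passwd",            # path traversal probe
]

def probe(url):
    for payload in PAYLOADS:
        resp = requests.get(url, params={"q": payload}, timeout=5)
        suspicious = (
            resp.status_code >= 500              # unhandled server error
            or payload in resp.text              # payload reflected verbatim
            or "sql" in resp.text.lower()        # database error leaking out
        )
        print(f"{payload!r}: status={resp.status_code} "
              f"{'SUSPICIOUS' if suspicious else 'ok'}")

if __name__ == "__main__":
    probe(TARGET)
```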

The way this works, she said, is that you reuse the functional tests you already run for QA, such as smoke tests and automated functional tests. While those tests run, IAST watches everything the application is doing and tries to figure out if there are any vulnerable code paths.
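A much-simplified sketch of that idea: mark data arriving from untrusted sources, run an ordinary functional test, and have an instrumented “sink” report when the untrusted data reaches it. The helper names and the fake query function here are illustrative, not how any particular IAST product works.

```python
# Toy sketch of the IAST idea: lightweight instrumentation watches the app
# while ordinary functional/smoke tests run and reports when untrusted input
# reaches a dangerous sink. The application code below is illustrative only.

TAINTED_VALUES = set()

def from_request(value):
    """Mark data arriving from an untrusted source (e.g. an HTTP parameter)."""
    TAINTED_VALUES.add(value)
    return value

def run_query(sql):
    """Instrumented sink: flag queries built from untrusted input."""
    if any(t and t in sql for t in TAINTED_VALUES):
        print(f"[IAST] untrusted input reached a SQL sink: {sql!r}")
    # ... the real database call would happen here ...

# --- application code under test --------------------------------------------
def lookup_user(username):
    run_query("SELECT * FROM users WHERE name = '" + username + "'")

# --- an ordinary functional test exercising the code path -------------------
if __name__ == "__main__":
    lookup_user(from_request("alice' OR '1'='1"))
    # The agent flags the vulnerable path even though the test itself passes.
```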

Introducing AI into security

Cuddy and Bell both said they are seeing more organizations building AI and machine learning into their offerings, particularly in the areas of cloud security, governance and risk management.

Historically, organizations have understood their threshold for what is acceptable risk and what is not. Cybersecurity has changed that dramatically: when a zero-day event occurs, it presents a risk organizations have not been able to assess beforehand.

“The best example we’ve had recently of this is what happened with the log4j scenario, where all of a sudden, something that people had been using for a decade, that was completely benign, we found one use case that suddenly means we can get remote code execution and take over,” Cuddy said. “So how do you assess that kind of risk? If you’re primarily basing risk on an insurance threshold or a cost metric, you may be in a little bit of trouble, because things that today are under that threshold that you think are not a problem could suddenly turn into one a year later.”

That, he said, is where machine learning and AI come in, with the ability to run thousands – if not millions – of scenarios to see if something within the application can be exploited in a particular fashion. And Cuddy pointed out that while most organizations are using AI to prevent attacks, there are unethical people using AI to find vulnerabilities to exploit.

He predicted that five or 10 years down the road, you will ask AI to generate an application from the data and prompts it is given. The AI will write the code, he noted, but it will be highly efficient, machine-to-machine code that humans might not even understand.

That will change the need for developers, but it comes back to the question of how far off that is. “Then,” Bell said, “it becomes much more important to worry about, and testing now becomes more important. And we’ll probably move more towards the traditional testing of the finished product and black box testing, as opposed to testing the code, because what’s the point of testing the code when we can’t read the code? It becomes a very different approach.”

Governance, risk and compliance

Cuddy said HCL is seeing the roles of governance, risk and compliance coming together, whereas in a lot of organizations those tend to be three different disciplines. There is a push to have them work together and connect seamlessly. “And we see that showing up in the regulations themselves,” he said.

“Things like NYDFS [New York Department of Financial Services] regulation is one of my favorite examples of this,” he continued. “Years ago, they would say things like you have to have a robust application security program, and we’d all scratch our heads trying to figure out what robust meant. Now, when you go and look, you have a very detailed listing of all of the different aspects that you now have to comply with. And those are audited every year. And you have to have people dedicated to that responsibility. So we’re seeing the regulations are now catching up with that, and making the specificity drive the conversation forward.”

The cost of cybersecurity

The cost of cyberattacks continues to climb as organizations fail to implement the safeguards necessary to defend against ransomware. Cuddy discussed the cost of implementing security versus the cost of paying a ransom.

“A year ago, there were probably a lot more of the hey, you know, look at the level, pay the ransom, it’s easier,” he said. But, Cuddy said, “there’s no guarantee that if we pay the ransom, we’re going to get a key that actually works, that’s going to decrypt everything.”

But cyber insurance companies, which have been paying out huge sums, are now requiring organizations to do their own due diligence and raising the bar on what they need to do to remain insured. “They have gotten smart and they’ve realized, ‘Hey, we’re paying out an awful lot in these ransomware things. So you better have some due diligence.’ And so what’s happening now is they are raising the bar on what’s going to happen to you to stay insured.”

“MGM could tell you their horror stories of being down and literally having everything down – every slot machine, every ATM machine, every cash register,” Cuddy said. And again, there is no guarantee that if you pay the ransom you are going to be fine. “In fact,” he added, “I would argue you’re likely to be attacked again, by the same group. Because now they’ll just go somewhere else and ransom something else. So I think the cost of not doing it is worse than the cost of implementing good security practices and good measures to be able to deal with that.”

When applications are used in unexpected ways

Software testers repeatedly say it is impossible to test for all the unintended ways people might use an application. How can you defend against something you have not even thought of?

Rob Cuddy, customer experience executive at HCL Software, recalls how he learned of the log4j vulnerability.

“Honestly, I found out about it through Minecraft, that my son was playing Minecraft that day. And I immediately ran up into his room, and I’m like, ‘Hey, are you seeing any bizarre things coming through in the chat here that look like weird textures that don’t make any sense?’ So who would have anticipated that?”

Cuddy also related a story from earlier in his career about unintended use, how it was dealt with, and how organizations harden against it.

“There is always going to be that edge case that your average developer didn’t think about,” he began. “Earlier in my career, doing finite element modeling, I was using a three-dimensional tool, and I was playing around in it one day, and you could make a join of two planes together with a fillet. And I had asked for a radius on that. Well, I didn’t know any better. So I started using just typical numbers, right? 0, 180, 90, whatever. One of them, I believe it was 90 degrees, caused the software to crash, the window just completely disappeared, everything died.

“So I filed a ticket on it, thinking our software shouldn’t do that. Couple of days later, I get a much more senior gentleman running into my office going, ‘Did you file this? What the heck is wrong with you? Like this is a mathematical impossibility. There’s no such thing as a 90-degree fillet radius.’ But my argument to him was it shouldn’t crash. Long story short, I talk with his manager, and it’s basically yes, software shouldn’t crash, we need to go fix this. So that senior guy never thought that a young, inexperienced, just fresh out of college guy would come in and misuse the software in a way that was mathematically impossible. So he never accounted for it. So there was nothing to fix. But one day, it happened, right. That’s what’s going on in security, somebody’s going to attack in a way that we have no idea of, and it’s going to happen. And can we respond at that point?”