A crystal-ball presentation on the future of application security at this year's Gartner Security and Risk Management Summit caught the eye of those of us in the software security space. In case you missed it, the top-line predictions were:
- By 2022, software composition analysis (SCA) will surpass traditional application security testing (AST) tools such as SAST and DAST as the primary tool in the AppSec toolbelt.
- By 2023, limited security testing capabilities incorporated into the development toolchain will identify more vulnerabilities than dedicated SAST solutions.
- By 2022, 10 percent of coding vulnerabilities identified by static application security testing (SAST) will be remediated automatically with code suggestions applied from automated solutions, up from less than 1 percent today.
If you’re a provider of software composition analysis solutions, the first bullet point obviously excites you, but by the same token it also challenges long-held security paradigms. For those unfamiliar with SCA, it identifies latent vulnerabilities in applications that originate not from the code teams create, but from the code they depend upon. Such dependencies are pervasive in modern applications thanks to the proliferation of high-quality open-source components that address common tasks within a given programming language or platform. Since open-source libraries are the foundation of modern application development, it makes sense to make tackling any latent unpatched vulnerabilities in them a primary task. In other words, once you address the issues in your libraries, you’re free to focus on issues in your custom code. SCA solves the former problem, while traditional tools like SAST solve the latter.
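At its core, the SCA approach described above is a matching problem: compare an application's declared dependencies against a database of known-vulnerable versions. The sketch below illustrates that idea only; the package names and advisory data are hypothetical, and real tools query live feeds such as the NVD or open-source advisory databases.

```python
# Minimal sketch of the core of SCA: flag dependencies whose pinned
# version appears in a known-vulnerability database.
# NOTE: ADVISORIES and all package names here are hypothetical.

# Hypothetical advisory database: package name -> set of vulnerable versions
ADVISORIES = {
    "example-http-lib": {"1.0.2", "1.0.3"},
    "example-xml-parser": {"2.1.0"},
}

def scan_dependencies(deps):
    """Return (package, version) pairs that have a known advisory."""
    return [
        (name, version)
        for name, version in deps.items()
        if version in ADVISORIES.get(name, set())
    ]

# An application's declared dependencies (hypothetical)
deps = {"example-http-lib": "1.0.3", "example-json-lib": "3.2.1"}
print(scan_dependencies(deps))  # -> [('example-http-lib', '1.0.3')]
```

Real SCA tools add considerable depth on top of this (transitive dependency resolution, version-range matching, reachability analysis), but the detection step remains a lookup against advisory data rather than analysis of your own code.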
The second bullet is more nuanced. Essentially, it states that traditional SAST solutions will take a back seat to limited testing capabilities baked into IDEs or other areas of the toolchain. While traditional SAST has a reputation for long test times, that is more a function of the depth of analysis in its checkers than of the value of the testing. Moving elements of code analysis into IDEs or into functional testing allows traditional SAST to focus on the hard task of discovering bugs across the entire application, while the IDE-based solution looks for incremental issues. By checking code incrementally within the IDE, fewer defects are logged, which both increases the quality of committed code and reduces testing costs. The reduction in testing costs for standard security issues then frees QA teams to focus their attention on architectural or system-level issues rather than preventable security defects.
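The incremental checking described above can be sketched as a shallow pattern scan over only the lines changed in a commit, the kind of check an IDE plugin or pre-commit hook might run in milliseconds while the deep, whole-program analysis stays with the SAST tool. The pattern and diff below are simplified illustrations, not a real tool's rules.

```python
import re

# Hypothetical incremental check: scan only changed lines for a shallow
# issue (here, a hardcoded credential) instead of re-analyzing the
# whole codebase. Real IDE checkers use many such fast, local rules.
HARDCODED_SECRET = re.compile(
    r'(password|api_key)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE
)

def check_changed_lines(changed_lines):
    """Return (line_number, text) for changed lines matching the pattern."""
    return [
        (num, line)
        for num, line in changed_lines
        if HARDCODED_SECRET.search(line)
    ]

# Simulated diff: (line number, new line text)
diff = [
    (12, 'api_key = "sk-test-1234"'),
    (13, "timeout = 30"),
]
print(check_changed_lines(diff))  # -> [(12, 'api_key = "sk-test-1234"')]
```

Because only the changed lines are inspected, feedback arrives while the developer is still in context, which is exactly why these shallow checks catch issues before they ever reach the commit that a full SAST scan would later flag.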
The last bullet is the most interesting. Automatic remediation of defects implies that security tools could become code generators. While it’s fairly straightforward to determine whether basic security defects exist within the code, there’s a wide gap between detection and semantically appropriate resolution. Put another way, would you rather trust your engineers to develop a security fix in your business logic, or have a tool apply a standardized fix built on generic assumptions?
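The gap between detection and resolution can be made concrete. Flagging SQL built by string interpolation is a simple pattern match, but a generic auto-fix must rewrite the statement into a parameterized form without understanding the surrounding business logic. The detector and code lines below are hypothetical illustrations.

```python
import re

# Detection is the easy half: a pattern match can flag SQL assembled via
# string interpolation or concatenation. (Hypothetical, simplified rule.)
INTERPOLATED_SQL = re.compile(r'execute\(\s*f?["\'].*(\+|\{)')

def detect(line):
    """Return True if the line looks like interpolated SQL."""
    return bool(INTERPOLATED_SQL.search(line))

vulnerable = 'cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")'
print(detect(vulnerable))  # -> True: easy to flag...

# ...but the "standardized fix" -- e.g. rewriting it to
#   cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
# -- assumes the driver's parameter style and that user_id needs no other
# handling, assumptions only someone who knows the code can confirm.
```

That asymmetry is why the prediction of even 10 percent automatic remediation is notable: the tool must generate code that is not just syntactically safe but semantically equivalent to what the developer intended.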
While I’m not yet comfortable with automated resolution of security issues, I am a huge fan of contextual training. When developers are armed with easy-to-use tools that identify security defects in their IDE and explain precisely why the code fails to meet security targets, we not only address the defect but also help prevent future occurrences. In the end, security tooling should focus on enabling developers to create more secure code before it is ever available to customers.
At a high level, the three predictions from Gartner boil down to providing development teams with the security tools they need, when they need them, without becoming roadblocks to development. When security tools become an impediment, engineers under tight deadlines will find ways to bypass them. This is why context-sensitive security information, covering both detection and remediation guidance, belongs in an IDE. It’s also why transparent security testing that operates in parallel with existing functional testing is a hot topic, and why continuous monitoring for new security disclosures within dependencies is crucial for released code. Each provides contextually valuable security information that enables the delivery of higher-quality code, and who wouldn’t want that?