Programmers err as much as any of us: between 15 and 50 errors per 1,000 lines of code, by common estimates. QA tests for these bugs, attempting to ensure that releases are as bug-free as possible. Customers who trust their operations to software won’t tolerate poorly written code, and teams go out of their way to ensure that programs work the way they are supposed to.

But another type of bug often doesn’t get the same attention: the security bug. Security bugs generally don’t affect performance, at least not right away, so teams tend to deprioritize them in favor of fixing functional bugs. In reality, though, a security bug is no different: it will eventually cause the app to do something unintended.

In fact, they have the potential to be far worse. A button that doesn’t respond properly (a functional bug) is inconvenient and annoying, and can drive employees using an application batty. But a hackable module in the software (a security bug) could give attackers the keys to the corporate kingdom, providing them with access to employee accounts, data, and even money.
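To make the difference concrete, here is a minimal sketch in Python (the table and function names are hypothetical, not from any particular product): the first query builds SQL by string interpolation, a classic injection hole that behaves perfectly in everyday use, while the second closes the hole with a parameterized query and no change in functionality.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Functionally "works," but an attacker controls part of the SQL:
    # username = "x' OR '1'='1" returns every row in the table.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input as plain data,
    # never as SQL, so the same payload matches nothing.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    conn.execute("INSERT INTO users VALUES (2, 'bob', 'bob@example.com')")

    payload = "x' OR '1'='1"
    print(find_user_vulnerable(conn, payload))  # leaks both rows
    print(find_user_safe(conn, payload))        # returns []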

Part of the challenge is that security is rarely called out explicitly as an outcome. Companies often have no rule book, best practices, or policies for developers to follow: security is expected, but very rarely asked for. That is the real difference between security bugs and functional bugs. Functional correctness is typically an explicit development requirement backed by policies; security is never explicitly asked for, yet it is expected to be there when the app is ready for production deployment.

If averting security bugs hasn’t been a priority for programmers, it certainly should be, and customers should demand testing for security bugs as part of quality control. In today’s programming environment, that is likely to mean using automated tools that test for security issues as program modules are developed, just as such tools are already used for functional bug testing.

With much development done in the cloud, for example, teams can use cloud-based tools to determine whether they have been following security best practices. OWASP (the Open Web Application Security Project) maintains a long list of automated security testing tools that can help developers detect vulnerabilities.

For maximum security testing, teams can use newly developed tools that check code against security scenarios as it is written and uploaded to repositories. Such tools can catch problems in specific modules before a security bug gets “buried” in the full application made up of dozens of modules, only to surface when a hacker gets wise to it and starts taking advantage of it. If developers “shift left” enough, seeking to resolve security issues as early as possible in the development process, we can squash security bugs as successfully as we do functional bugs.
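As a sketch of what such a repository-level check might look like, the script below wraps Bandit, an open-source static analyzer for Python of the kind OWASP catalogs, in a git pre-commit hook. The article doesn’t prescribe a specific tool; Bandit and the src/ layout here are illustrative assumptions, not the only way to shift left.

```python
#!/usr/bin/env python3
"""Sketch of a pre-commit hook that blocks commits containing likely
security bugs. Assumes Bandit is installed and the project's code
lives under src/; adjust both for your own setup."""
import subprocess
import sys

def main() -> int:
    # Bandit walks the source tree and exits non-zero when it finds
    # issues; -ll limits the report to medium severity and above.
    result = subprocess.run(["bandit", "-r", "src/", "-ll"])
    if result.returncode != 0:
        print("Commit blocked: fix the reported security issues first.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

Saved as .git/hooks/pre-commit and made executable, the script runs on every commit, so an injection hole or hard-coded credential is flagged minutes after it is written rather than after release.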

In conclusion, when shifting work left, make sure to call out “better security” as something developers are expected to deliver, rather than just hoping for it.

For more information, go to www.hcltechsw.com/wps/portal/products/appscan/home

Content provided by SD Times and HCL Software