Cope said another big problem facing development organizations is that as new developers come on board, they do not understand what weaknesses in code look like. “There’s always new developers coming on board, so the guy who got burned by [a hack] a few years ago and learned to avoid SQL injection issues going forward or buffer overflows, he’s maybe taking care of his code,” he said.

“But the guy out of school, or the person who just switched and is now writing code in a new language and doesn’t understand what that weakness looks like, is now introducing them into the code again. It’s not the same guy failing 15 years in a row, it’s a new crop of developers that are re-inventing those failures.”
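The SQL injection weakness Cope refers to is easy for a newcomer to reintroduce and easy to illustrate. Here is a minimal sketch in Python using the standard library’s sqlite3 module; the table, column, and variable names are hypothetical, chosen only to show the contrast between string concatenation and a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # attacker-controlled value

# Vulnerable: concatenating input into SQL lets it rewrite the query.
# The WHERE clause becomes: name = 'x' OR '1'='1' -- always true.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(unsafe)  # [('admin',)] -- every row leaks

# Safe: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # [] -- no user is literally named "x' OR '1'='1"
```

The same bug pattern recurs in every language and database driver, which is why each “new crop of developers” can rediscover it.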

Beyond not understanding code they didn’t write, developers coming into an ongoing project need to understand what Gemalto product manager Todd Steel called the boundaries of the operating environment. Developer security testing, he said, has been “more about data object analysis and how to solve problems rather than understanding the system architecture and the limits of that system architecture.”

But Steel added that he believes strongly that “it is the developers’ responsibility to understand the environment and the vulnerability.”

Traditionally, developers write features and perform functional and regression tests, and builds are made to ensure nothing is broken. Then it’s up to the QA team to make sure the software works properly. Cope said security testing comes at software from a different perspective: it’s the job of the security practitioner to look at how the software can be broken, not whether it runs properly. And those are skills developers often aren’t taught and don’t possess.

Steven said there will always be problems with the hygiene of a codebase. “We, the tester, we’re kind of like the dentist,” he said. “People go to the dentist and he asks, ‘Did you brush your teeth after every meal?’ And we’re like, ‘No, I had a business lunch and couldn’t.’ Hygiene is impossible. Asking developers to be hygienic as they write every line of code is impossible.”

Time for a new approach
Since efforts to secure software have failed in so many high-profile and damaging cases, the experts we spoke to for this article say it’s time to reassess the way software is protected, whether through automated tooling or even by giving applications a way to protect themselves from attacks.

“There are things that are descriptive and discretionary, like password storage, which we know how to do right but is too hard for the garden-variety developer to get right,” Steven explained. “We can show them how to do it once. In an enterprise we can code it up for them once, and then they can use it and forget that they ever had that problem again…this notion of a security API, right?
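Password storage is Steven’s example of something that can be coded up once and then reused. A minimal sketch of such a “security API” using only Python’s standard library (the function names are illustrative, not from any specific enterprise codebase):

```python
import hashlib
import hmac
import os

# Code this up once; developers call it and never touch the details again.
def hash_password(password: str) -> bytes:
    """Return salt + PBKDF2 digest, suitable for storing in a database."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Recompute the digest with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

stored = hash_password("s3cret")
print(verify_password("s3cret", stored))  # True
print(verify_password("wrong", stored))   # False
```

The point is exactly the one Steven makes: the salt, the iteration count, and the constant-time comparison are decided once, behind the API, so the garden-variety developer can “forget that they ever had that problem.”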

“And then there are these hygienic problems,” he continued, “like the vulnerabilities that cause the injection problems we discussed earlier, that there’s no amount of testing or no single control that’s going to solve the problem. So we need to have the equivalent of the hygiene program, which is different libraries and frameworks. You use different tools to build the software that is hardened against those types of vulnerabilities so that it’s not possible for the developer to fall into bad hygiene or lack of hygiene that creates the vulnerabilities.”
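The hardened-library idea Steven describes is escaping or encoding by default, so bad hygiene is not possible. A sketch of the principle with a hypothetical rendering helper built on Python’s standard html module:

```python
import html

# Hypothetical helper: escaping is the default behavior, so a developer
# cannot forget it and introduce an HTML-injection (XSS) vulnerability.
def render_greeting(name: str) -> str:
    return "<p>Hello, " + html.escape(name) + "!</p>"

print(render_greeting("Alice"))
# <p>Hello, Alice!</p>
print(render_greeting("<script>alert(1)</script>"))
# <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>
```

Production template engines apply the same design at scale: output is escaped unless the developer explicitly opts out, inverting the old default where safety required remembering to do extra work.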

Cope said you cannot expect developers to get security right by hand. “It’s just not going to happen,” he said. “This is just tedious overhead, as far as they’re concerned.

“So you’ve got to have some process in place as part of the development life cycle to make sure that it’s not just another thing you have to check off the box, but it’s a really important, critical part of the process. Just like you’d write code in an agile fashion and you would always have tests to prove that it works, your unit tests and functional tests and whatnot, I think security tests have to be elevated to that same level of first-class citizen.”
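One way to read Cope’s point about making security tests first-class citizens: a security test asserting that the code cannot be broken can sit in the same suite as the functional tests and run on every build. A sketch using Python’s unittest, with a hypothetical lookup function as the code under test:

```python
import sqlite3
import unittest

# Hypothetical function under test: it uses a parameterized query, so
# attacker-controlled input cannot alter the SQL.
def find_role(conn, name):
    row = conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None

class UserLookupTests(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        self.conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def test_lookup_works(self):
        # Functional test: does it run properly?
        self.assertEqual(find_role(self.conn, "alice"), "admin")

    def test_injection_attempt_fails(self):
        # Security test: can it be broken? This runs with the same
        # priority as the functional test, on every build.
        self.assertIsNone(find_role(self.conn, "alice' OR '1'='1"))

if __name__ == "__main__":
    unittest.main()
```

Whether the suite uses unittest, pytest, or a CI-integrated scanner matters less than the placement: the break-it test fails the build just as a broken feature would.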