The IEEE Center for Secure Design has published a report on the Top 10 software security design flaws (and how to avoid them).
When it comes to making sure software is secure, too much of the attention is focused on bugs and not enough of the conversation is about design flaws, according to Gary McGraw, CTO of security software provider Cigital. Cigital is one of the founding members of the Center for Secure Design, an organization made up of technology and security companies and researchers.
“Bugs and flaws are two very different types of security defects,” he said. “We believe there has been quite a bit more focus on common bugs than there has been on secure design and the avoidance of flaws, which is worrying since design flaws account for 50% of software security issues.”
When it comes to cyber attacks, even the biggest companies are susceptible, and an opening can be as simple as failing to validate data, or placing sensitive data on a client’s system in the belief that attackers won’t find it. In fact, JPMorgan Chase and at least four other U.S. banks were recently involved in a cyber attack reportedly stemming from an employee’s personal computer that was infected with malware.
To help protect software systems, the Center for Secure Design has come up with a list of the Top 10 software security design flaws, and some practices for avoiding them.
1. Incorrect trust assumptions: Don’t assume trust. Authorization, access control, security policy enforcement and embedded sensitive data should never be placed in client software because users and attackers will find them, according to the report. Also, never trust any data sent from clients. Make sure all data received is properly validated.
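To make the point concrete, here is a minimal Python sketch (not drawn from the report; the catalog, field names and limits are invented for illustration) of a server-side handler that ignores the client-supplied values it should never trust and explicitly validates the rest:

```python
from decimal import Decimal

# Hypothetical server-side catalog; a real system would use a database.
CATALOG = {"sku-123": Decimal("19.99")}

def place_order(client_payload: dict) -> Decimal:
    """Compute an order total from server-side data only.

    The client may send a 'price' field, but we never read it:
    authoritative values live on the server, and everything the
    client does send is validated before use.
    """
    sku = client_payload.get("sku")
    if sku not in CATALOG:
        raise ValueError("unknown SKU")

    qty = client_payload.get("quantity")
    if not isinstance(qty, int) or not (1 <= qty <= 100):
        raise ValueError("quantity must be an integer between 1 and 100")

    # Price comes from the server-side catalog, not the request.
    return CATALOG[sku] * qty
```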
2. Broken authentication mechanisms: One of the main goals of secure software design is to prevent attackers and even users from gaining access to a system without validating identity, according to the report. The Center for Secure Design recommended having a single authentication mechanism leverage one or more factors according to an application’s requirements, and making sure authentication credentials have a limited lifetime, are unforgeable, and are stored in such a way that, if the stored form is stolen, it cannot easily be used to impersonate a legitimate user.
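A rough sketch of what those properties can look like in practice, using only Python’s standard library (the token format and the 15-minute lifetime are arbitrary choices for illustration): salted password hashes mean the stored form can’t simply be replayed, and HMAC-signed tokens with a built-in expiry are unforgeable and short-lived:

```python
import hashlib, hmac, os, time

SERVER_KEY = os.urandom(32)  # kept secret on the server

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only a salted PBKDF2 hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def issue_token(user_id: str, ttl_seconds: int = 900) -> str:
    """Issue an HMAC-signed token with an expiry (limited lifetime)."""
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{user_id}:{expires}"
    sig = hmac.new(SERVER_KEY, payload.encode(), "sha256").hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str) -> str | None:
    """Reject forged or expired tokens; return the user id if valid."""
    user_id, expires, sig = token.rsplit(":", 2)
    payload = f"{user_id}:{expires}"
    expected = hmac.new(SERVER_KEY, payload.encode(), "sha256").hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # unforgeable: signature doesn't match
    if time.time() > int(expires):
        return None  # limited lifetime: token has expired
    return user_id
```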
3. Neglecting to authorize after authentication: Knowing a user’s identity is not enough to decide whether or not they may perform certain actions, according to the report. “Authorization should be conducted as an explicit check, and as necessary even after an initial authentication has been completed,” said the report. “Authorization depends not only on the privileges associated with an authenticated user, but also on the context of the request.”
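One common way to make that check explicit is a per-action guard that runs on every request rather than once at login. In this hypothetical Python sketch, the permission table and decorator are invented for the example:

```python
from functools import wraps

# Hypothetical role store; a real system would consult a policy engine.
PERMISSIONS = {"alice": {"read", "write"}, "bob": {"read"}}

def requires(permission: str):
    """Check authorization explicitly on every call, not just at login."""
    def decorator(func):
        @wraps(func)
        def wrapper(user: str, *args, **kwargs):
            # Authentication established *who* the user is; this check
            # decides whether that user may perform *this* action.
            if permission not in PERMISSIONS.get(user, set()):
                raise PermissionError(f"{user} may not {permission}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("write")
def update_record(user: str, record_id: int, value: str) -> None:
    print(f"{user} updated record {record_id}")

update_record("alice", 42, "ok")      # allowed
# update_record("bob", 42, "denied")  # raises PermissionError
```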
4. Lack of strict separation between data and control instructions: “Comingling data and control instructions in a single entity, especially a string, can lead to injection vulnerabilities,” according to the report. “Lack of strict separation between data and code often leads to untrusted data controlling the execution flow of a software system.”
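The classic instance of this flaw is SQL injection, and the classic fix is keeping data in the data channel with parameterized queries. A small runnable Python example using the standard library’s sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # hostile input that mixes data into control

# Flawed: string interpolation lets the data rewrite the query itself.
# rows = conn.execute(f"SELECT role FROM users WHERE name = '{name}'")

# Correct: the ? placeholder keeps the input strictly in the data channel,
# so the hostile string matches no rows instead of hijacking the query.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,))
print(rows.fetchall())  # []
```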
5. Not explicitly validating all data: Software systems and components often make assumptions about the data they operate on, and if designers don’t explicitly make sure that such assumptions hold, vulnerabilities will appear, according to the Center. “As such, it is important to design software systems to ensure that comprehensive data validation actually takes place and that all assumptions about data have been validated when they are used.”
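As an illustration (the fields and rules here are hypothetical), a validation routine at the trust boundary can check every field explicitly and return only what it expects, instead of assuming some upstream layer already did the checking:

```python
import re

USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")

def validate_signup(form: dict) -> dict:
    """Validate every field explicitly; reject anything missing,
    malformed, or unexpected rather than assuming it is well-formed."""
    username = form.get("username", "")
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("username must be 3-32 chars: lowercase, digits, _")

    age = form.get("age")
    if not isinstance(age, int) or not (13 <= age <= 120):
        raise ValueError("age must be an integer between 13 and 120")

    # Return a clean copy containing only the fields we validated.
    return {"username": username, "age": age}
```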
6. Misuse of cryptography: Common pitfalls of cryptography include rolling your own cryptographic algorithms or implementations, misusing libraries and algorithms, poor key management, randomness that is not random, failure to centralize cryptography, and failure to allow for algorithm adaptation and evolution, according to the report. To avoid these pitfalls, the Center recommends working with an expert whenever possible, so that individual developers aren’t left making decisions about algorithms and cipher modes on their own.
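In practice that usually means reaching for a vetted high-level recipe rather than assembling primitives by hand. This sketch assumes the third-party Python “cryptography” package, whose Fernet recipe makes the algorithm and cipher-mode choices for you and authenticates the ciphertext:

```python
# A minimal sketch using the third-party 'cryptography' package
# (pip install cryptography). Fernet bundles vetted choices of
# cipher, mode, and authentication so developers don't pick them.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()  # OS-level randomness, not random.random()
f = Fernet(key)

token = f.encrypt(b"account number 4111-1111")  # authenticated encryption
print(f.decrypt(token))                         # b'account number 4111-1111'

try:
    f.decrypt(token[:-1] + b"0")  # tampering is detected, not ignored
except InvalidToken:
    print("tampered ciphertext rejected")
```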
7. Failure to identify sensitive data and how they should be handled: To make sure data is handled properly, designers need to identify different levels of data classification and factor all relevant considerations into the design, according to the report. Not all data protection requirements are the same, and designers need to identify all the ways that data could be exposed or manipulated, the report said.
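One lightweight way to make classification explicit in a design (sketched here with invented levels and rules) is to attach handling requirements to each classification level, so protection decisions are made by design rather than ad hoc:

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3

# Hypothetical policy: each level carries its own handling requirements.
POLICY = {
    Classification.PUBLIC:       {"encrypt_at_rest": False, "log_access": False},
    Classification.INTERNAL:     {"encrypt_at_rest": True,  "log_access": False},
    Classification.CONFIDENTIAL: {"encrypt_at_rest": True,  "log_access": True},
}

def handling_rules(level: Classification) -> dict:
    """Look up handling requirements from the data's classification."""
    return POLICY[level]

print(handling_rules(Classification.CONFIDENTIAL))
```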
8. Failure to consider the user: Software systems interact with humans in one way or another, which is why all relevant stakeholders should always be considered, according to the report. “The security stance of a software system is inextricably linked to what its users do with it. It is therefore very important that all security-related mechanisms are designed in a manner that makes it easy to deploy, configure, use and update the system securely,” the Center wrote.
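A small, hypothetical illustration of that principle is shipping secure defaults, so the path of least resistance for users is also the safe one:

```python
from dataclasses import dataclass

@dataclass
class ServerConfig:
    """Hypothetical settings whose defaults are the safe choice, so a
    user who deploys without studying every option still ends up with
    a securely configured system."""
    require_tls: bool = True             # users opt *out* of security, never in
    session_timeout_minutes: int = 15
    allow_anonymous_access: bool = False

config = ServerConfig()  # secure out of the box
print(config)
```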
9. Failing to anticipate how integrating external components can open a vulnerability: Designers and security architects have to make sure time is allocated to consider how an external component is going to impact a system, and assume that all external components are untrustworthy until proven otherwise, according to the report.
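As a simple illustration of treating an external component as untrustworthy until proven otherwise (the digest below is a made-up pin), a system can verify an artifact’s hash against a value recorded when the component was vetted:

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest, recorded when the component was reviewed.
EXPECTED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def verify_component(path: Path) -> bytes:
    """Treat the third-party artifact as untrusted until its hash
    matches the digest pinned at review time."""
    data = path.read_bytes()
    actual = hashlib.sha256(data).hexdigest()
    if actual != EXPECTED_SHA256:
        raise RuntimeError(f"component {path} failed integrity check")
    return data
```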
10. Brittleness in the face of future changes: Software security must be designed for change, according to the report. The Center recommended designing for security updates; for security properties that change over time (for example, when code is updated); for the ability to isolate or toggle functionality, making it possible to turn off compromised parts and turn on performance-affecting mitigations if needed; for changes to things intended to be kept secret, such as encryption keys and passwords; for changes in the security properties of components beyond your control; and for changes to entitlements (for example, when a staffer leaves the organization or when job functions change).
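Two of those recommendations, key rotation and the ability to toggle functionality, are easy to sketch. This hypothetical Python example versions its signing keys so that old signatures survive a rotation, and keeps a runtime flag table for switching a compromised feature off:

```python
import hmac, os

# Keys are versioned so they can be rotated without breaking old data.
KEYS = {"v1": os.urandom(32), "v2": os.urandom(32)}
CURRENT_KEY_ID = "v2"  # rotating means adding "v3" and pointing here

# Runtime switches make it possible to turn off a compromised feature.
FEATURE_FLAGS = {"legacy_upload": False}

def sign(message: bytes) -> tuple[str, str]:
    """Sign with the current key and record which key was used."""
    tag = hmac.new(KEYS[CURRENT_KEY_ID], message, "sha256").hexdigest()
    return CURRENT_KEY_ID, tag

def verify(message: bytes, key_id: str, tag: str) -> bool:
    """Signatures made before a rotation stay verifiable; retiring a
    key entirely is just deleting it from the table."""
    key = KEYS.get(key_id)
    if key is None:
        return False
    expected = hmac.new(key, message, "sha256").hexdigest()
    return hmac.compare_digest(tag, expected)

def handle_upload(kind: str) -> str:
    """Feature code checks its flag, so a compromised path can be
    disabled in configuration instead of via an emergency release."""
    if kind == "legacy" and not FEATURE_FLAGS["legacy_upload"]:
        return "legacy upload is disabled"
    return "upload accepted"
```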
“The reason why design flaws don’t get as much attention as software bugs is because they are hard to deal with,” said Cigital’s McGraw. “It’s much easier to solve the bug problem because we have technology to help us solve it. There is no such thing at the design level, but that doesn’t mean they aren’t as important.”
The design flaws were determined through a workshop consisting of the founding members of IEEE’s Center for Secure Design. Founding members include professionals from Athens University of Economics and Business, Cigital, EMC, George Washington University, Google, Harvard University, HP, McAfee, RSA, the Sadosky Foundation, and the University of Washington.
The full report is available on the IEEE Center for Secure Design’s website.