When it was announced on April 7 that OpenSSL was vulnerable to a dangerous new attack that could reveal private keys and other sensitive server memory to an attacker, the Internet spent a few days in panic mode. Thousands, if not millions, of sites used (and still use) OpenSSL, and rolling out the fix took days.
The OpenSSL team became the subject of all manner of suggestions and offers of help, from sources such as the OpenBSD team and from security companies. In the end, the OpenSSL core team expanded, and at least two separate organizations began full-scale security audits of the project. The OpenBSD team even decided to fork OpenSSL and rebuild it themselves, as LibreSSL.
While the entire Heartbleed affair was complex and dangerous, and required admins and developers alike to put out a lot of fires, the incident cast a harsh light on the current state of software development, and on the fact that so many teams rely on the security and integrity of software they did not write.
Many solutions have been offered by experts and vendors alike, but the most common and traditional security practices in software just might not be enough anymore. Static analysis, for example, failed to detect the bug in OpenSSL, primarily due to the complexity of the code that caused the vulnerability. In many ways, technical debt is just another form of security risk.
Zack Samocha, senior director of product at Coverity, said that his company did not detect the Heartbleed bug because it was such a strange case, hidden in such complex code. But, he added, Coverity 7.5, which arrived in July, is now able to find such security issues when doing static analysis of code. He said that Heartbleed brought a lot of new customers to Coverity.
“We got a lot of questions from our customer base,” said Samocha. “We had a huge amount of growth in Coverity Scan. Heartbleed made many customers very concerned about the supply chain of open source, not just the functionality, but also the security. It doesn’t matter if the code you provide to your customer comes from open source: It’s your product out there. To be able to solve security issues, it’s not just enough to have a security team; you really have to make sure security is a part of every developer’s workflow. We can really help with that.”
Samocha said that security is about being proactive, not just defending the network and reacting to threats. “All of those things are after the fact. They have their issues and they’re not really scalable,” he said. “We believe if you really want to get the security working well, it has to be part of the development workflow. That means if I’m a developer coding my tool, I’m going to scan my code and find the issues even before it’s checked in.”
But adding security to the workflow isn’t the way most IT organizations work. Instead, IT organizations with security teams often rely on auditing to ensure their systems are secure. “If you look into the market, the majority of the market is still doing auditing,” said Samocha.
“But to help all of my development organizations to scale and support the security, this just doesn’t fly. The goal of the security team is to help the development organizations to embed the security life cycle in their development workflow, rather than providing auditing services for those teams.”
Language as the problem
While adding security tools to the workflow is a popular way to embed security into the development process, some developers have been offering even more drastic solutions to software security woes. One of the big ones, which sounds almost like a joke upon first hearing it, is simply that developers shouldn’t use C.
Gary McGraw, CTO of Cigital, said exactly this. “Mostly, what I advocate is getting rid of C as a programming language,” he said. “It’s important to realize some languages are better than others, and some software security techniques are better than others. Heartbleed was a particularly heinous piece of code, even from the perspective of understanding how it works.”
C’s lack of type and memory safety is the primary security problem when developing in the language. Without careful coding practices, buffer overflows can be quite common unless measures and tests are in place to stop such exploits from making it into production.
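To make that concrete, here is a minimal C sketch of the kind of flaw at the heart of Heartbleed. It is illustrative only, not OpenSSL’s actual code: the function trusts a length field supplied by the peer rather than the amount of data actually received.

#include <stdlib.h>
#include <string.h>

/*
 * Illustrative sketch only, not OpenSSL's code: a record arrives with a
 * two-byte header claiming how long its payload is, and the function
 * echoes that payload back to the sender.
 */
unsigned char *echo_payload(const unsigned char *record, size_t record_len)
{
    (void)record_len;   /* the real length is never consulted; that is the bug */

    size_t claimed_len = ((size_t)record[0] << 8) | record[1];
    const unsigned char *payload = record + 2;

    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;

    /*
     * BUG: nothing verifies that claimed_len <= record_len - 2, so a peer
     * that sends a tiny record with a huge claimed length gets back whatever
     * happens to sit in memory beyond the real payload.
     */
    memcpy(reply, payload, claimed_len);
    return reply;
}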
McGraw said he’s been advocating for abstinence from C for many years, and that only recently have other developers begun to take the idea seriously.
“I went to Bell Labs where I gave a talk in the year 2000,” he said. “During the talk, I would always get the audience to chant along ‘C is bad!’ I was sitting there with Dennis Ritchie, who invented C, in the front row. It took a while before he nodded his head and everyone started chanting. It’s an issue that really science researchers and academics and programming languages people have paid more attention to than development managers. It’s high time development managers think about programming language choices.”
McGraw said that C can be avoided in modern environments, and particularly on the Web. “The main thing is that if there’s a modern approach, with a modern language, and the trade-offs make sense, you should adopt that. With great power comes great responsibility; in some cases you do have to use C or you might even have to use assembly, but not in most cases,” he said.
Language as the answer
If choosing a new language is the approach your team favors, there are a lot of options. Ada, in particular, was designed to minimize fault and security risks, and as such is an example of a programming language that can reduce the problems an application faces once it makes it out into the wild.
Robert Dewar, AdaCore’s CEO and cofounder, said that Ada’s use in mission-critical systems, like avionics controls, has turned it into a language that can be trusted, provided your developers are serious about security when they write their code.
“Part of the problem is just culture,” he said. “A lot of code is written in a ‘throw it together, test, debug and it’s good enough’ fashion. We’ve never been satisfied doing that on avionics code, but that care isn’t applied in other areas where you’d expect it to be applied, like automotive software and medical software.”
While Ada has long been the choice for systems that cannot have any bugs, it’s only recently been seen as a security solution, especially now that embedded control systems on planes and in cars are more vulnerable to attacks, said Dewar.
“We haven’t lost a life on a commercial airline due to software bugs,” he said. “On the other hand, we have only just started to worry about security in avionics software. Boeing issued a warning on the 737 to protect it from security flaws. It’s certainly true that using a language like C makes life harder. Nobody should be saying it’s impossible to write secure software in C, but it certainly makes life harder.”
Dewar said that the traditional software development discipline of proving an application is bug-free could be a good method of ensuring security for applications. “The air traffic control system for England is mathematically proved to be free of buffer overrun errors,” he said. “They cannot occur because [it is] mathematically proven that they can’t happen.”
Proofs aren’t exactly child’s play, but they can be a good way to reassure yourself and your bosses of the security of the software. “Proof of correctness is a really interesting aspect of [software security],” said Dewar. “It is not a magic bullet: You can have a bad specification and carefully prove your software matches that bad specification. But it’s helpful to prove code is free of buffer overrun errors, which is the most common vulnerability of C code. It’s something that can be avoided.
“Ada is not the only language where it’s just not possible to overrun memory arrays. That’s all checked at runtime. It’s only a partial solution, and it’s not helpful if you’re driving a car and having a message come up saying something’s gone wrong with the software. But with something like Heartbleed, it would be better if that software crashes with an error.”
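To see what Dewar is describing in C terms, here is a hedged counterpart to the earlier sketch: the same echo routine with the bounds check in place, so a malformed record is rejected instead of leaking adjacent memory. Languages with mandatory runtime checks, Ada among them, enforce this kind of test automatically and raise an error when it fails.

#include <stdlib.h>
#include <string.h>

/* Checked version of the earlier illustrative sketch. */
unsigned char *echo_payload_checked(const unsigned char *record, size_t record_len)
{
    if (record_len < 2)
        return NULL;                 /* too short to even carry a length field */

    size_t claimed_len = ((size_t)record[0] << 8) | record[1];

    if (claimed_len > record_len - 2)
        return NULL;                 /* claimed length exceeds the data received */

    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;

    memcpy(reply, record + 2, claimed_len);
    return reply;
}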
Enforcing at code time
A few of the more popular solutions for security in the development life cycle are static code analysis tools like Coverity, FindBugs and Veracode. Another approach advocates the use of in-IDE enforcement tools like Klocwork and Cigital’s SecureAssist. But all of these tools have their strengths and limitations.
“Most static analysis tools say I found lots of potential bugs, but I haven’t found all of them—maybe 90% of them,” said Dewar. “That’s not terribly convincing. The immediate question [of] ‘What about the other 10%?’ is a serious one for security applications. These tools that help find bugs are not the answer. You need sound tools that, when they say there’s no possibility, there’s no possibility.”
That’s because one of the traditional problems with static analysis tools is that they don’t catch everything, and they can throw false positives.
McGraw advocated for enforcing at coding time, and his company’s tool, SecureAssist, does just that. “Klocwork [and tools like it], I think those are fantastic tools. Anything that helps you pay attention to security while you’re coding is good. Coverity and Veracode, those are used at compile or after compile time,” he said.
McGraw advocated for the use of “SecureAssist and other tools that are available in the IDE, and that notice bugs while you’re typing them and say ‘Hey don’t do it that way, do it this way.’ The closer you can get to the developer’s keyboard, the better off we are from a security perspective.”
Art Dahnert, product manager for security at Klocwork, said that enforcing good design and development practices when developers are coding can help keep the security flaws from ever making it into production.
“The whole key is that it’s a process,” he said. “The secure software development life cycle is a process. Developers write code, they check it in, and it gets built into product. Along the way, we’ve developed techniques to make that process faster and less error-prone.
“That’s where we want to jump in and help developers to not implement errors into the code. Within the IDE we may go faster and not find everything the developer is working on because we want to be unobtrusive. But when they check it in and compile, we can check it. Before we’ve gotten into the integration build and the final version, we’ve got development errors for that developer so he can see the problem at check-in time.”
But the key to helping developers when they’re writing code isn’t to lock them out of their favorite IDE features and stop their work every time there’s an issue. Rather, the key, said Dahnert, is to be unobtrusive. “Developers have to be educated. A lot of developers out there are so focused on schedules and time constraints that they don’t get the actual training they need for security,” he said.
“You’ve got to make it as unobtrusive as possible. It has to be fast, because developers need fast machines. Those two things are the crux of it: Make it fast and get it out of the way. You also have to take advantage of the actual IDE itself: Visual Studio has the little red squiggles, and Klocwork will use that API and that visual reference to say there’s a problem.”
Other solutions
Pushing security responsibility onto your development team is certainly a great way to get them involved in the security process, but there are many other practices that can help to minimize your team’s exposure to risk via software exploitation.
One tactic is to maintain core competency in all of the software packages that are critical to your infrastructure. Dave Miller, chief security officer of Covisint, said that his company has been able to build its systems from the ground up since 2000, meaning there’s little legacy software or hardware that needs to be supported. This helps to minimize risk for the OEM supply-chain connection company.
Covisint used OpenSSL when Heartbleed was revealed, and yet the company was completely safe from the exploit. How? Miller said his team understands OpenSSL’s internals and had long ago recompiled the project using only the parts they needed.
“There are two ways you can install OpenSSL: You can take the modules and install and use them, or you can take the object modules and recompile,” he said. “One of the things we did—and this is why Heartbleed did not affect us—the vulnerability was part of this Heartbeat module that was a UDP module. Very few of our systems use UDP, so I didn’t compile it into my OpenSSL implementations. We’ve had the same problems with [Secure FTP]; we recompiled the FTP modules to only allow puts. The only command available was put. You just don’t compile the functionality in, you compile it as small as possible.”
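The compile-time exclusion Miller describes is typically done with preprocessor guards. OpenSSL itself wraps its heartbeat code in an OPENSSL_NO_HEARTBEATS macro, set by the no-heartbeats build option, and recompiling with that flag was a widely suggested Heartbleed mitigation. The sketch below shows only the general pattern, with hypothetical names.

#include <stddef.h>

/* Hypothetical library code: the optional feature is compiled in only if the
 * build does not define MYLIB_NO_HEARTBEATS. */
#ifndef MYLIB_NO_HEARTBEATS
static void handle_heartbeat(const unsigned char *record, size_t record_len)
{
    /* ... parse the request and echo the payload back ... */
    (void)record;
    (void)record_len;
}
#endif

void dispatch_record(int record_type, const unsigned char *record, size_t record_len)
{
#ifndef MYLIB_NO_HEARTBEATS
    if (record_type == 24) {         /* 24 is the TLS heartbeat content type */
        handle_heartbeat(record, record_len);
        return;
    }
#endif
    /* ... handle only the record types the product actually needs ... */
    (void)record_type;
    (void)record;
    (void)record_len;
}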
Mathieu Baissac, vice president of product management at Flexera, advocates for another tactic for software developers: staying on top of your updates.
That doesn’t just mean updating servers and your internal software; it means updating your products as soon as possible once they’re in the field. Baissac advocates speed as the solution, rather than simply preparedness on the code side. “There’s always going to be a Heartbleed, especially because it wasn’t even our software,” he said. “No matter how good their coding practice was, there was no way they could solve that themselves.
“When I think of Heartbleed, I don’t think about coding practices, but rather how do you get that fixed as fast as possible. There’s always going to be bugs and security issues. The thing you want to concentrate on is not relying on the end customer to go get their latest patch, because they never will. You’ve got to put things in your product so it is smart enough to say, ‘Do you have new firmware for me?’ ”
Flexera makes software that handles automated updating of OEM and ISV products in the field, enabling products to update themselves automatically when new software is available.
While that’s one solution, internally hosted applications still need to be kept up to date so that the services they provide to your customers do not expose them to any undue risk. Axway’s Mark O’Neill, vice president of innovation, advocates for developers to be careful with their APIs and how they implement them.
“In the case of APIs, if you look at the security models around APIs, often a developer is working with API keys,” he said. “It’s up to the developers to manage those keys, make sure they’re not including those keys in the application. Those are gotchas that developers have to be aware of. There have been recent examples where developers have put code into GitHub, but left the API keys in GitHub, leaving them vulnerable to social engineering-type attacks. If you look at the awareness of security around API keys, I don’t think it’s the level of SSL private keys. Everyone knows SSL private keys are sensitive and have to be stored on the file system, but in the case of API keys, that kind of practice can’t happen.”
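A small sketch of the practice O’Neill describes: load the API key from the runtime environment (or a secrets store) instead of embedding it in source code that may end up in a public repository. The EXAMPLE_API_KEY variable name is hypothetical.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Read the key from the environment rather than hard-coding it in source. */
    const char *api_key = getenv("EXAMPLE_API_KEY");
    if (api_key == NULL || api_key[0] == '\0') {
        fprintf(stderr, "EXAMPLE_API_KEY is not set; refusing to start\n");
        return 1;
    }

    /* ... pass api_key to the HTTP client when calling the API ... */
    printf("API key loaded (%zu characters)\n", strlen(api_key));
    return 0;
}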
But no matter what practices you put in place to handle software security, AdaCore’s Dewar is still shocked at the lack of software security regulation and awareness out there.
“I don’t know why we put up with software problems the way we do,” he said. “Every week I read some new disaster of a security nature. There’s the Target stuff being taken; the entire British license database was compromised. Every week you can read something like that, and there tends to be an attitude in the press that glitches are inevitable. ‘Glitch’ is a nasty word, and you can’t really stop them, and we have a whole culture of unreliable software.
“Isn’t it amazing we’re using an OS where, if you click on an attachment in an e-mail you can completely compromise a machine? Yet we blame the people who click the images, but it’s the fault of the OS, and I can’t believe people put up with it. Sometimes you see stories of young hackers being thrown in jail for getting into systems. If I had top security data on my desk and I left my door open and unlocked and someone walked in, there would be two people in trouble. I feel that when it comes to OS security we just don’t hold the producers of that software accountable.”