So you ran the final build on that new Web-based order and inventory program and handed it over to the QA team. The tests look pretty good and you get the go ahead to deploy. Now all you have to worry about is the marketing department hounding you to create a smartphone app and maybe something for Facebook, and every time you go to a meeting with management, someone asks what you’re doing about that cloud thing.

Your deadlines are always looming, there are always a few pesky bugs still in the code, and you could really use a few more people on your team. But you roll up your sleeves and get started on the next project.

Two days later, your network crashes, the entire database has been trashed, and everyone starts running around frantically trying to figure out what happened. Was it the firewall? The server? Some rogue virus? In the back of your mind you wonder if it might even be your new application. Eventually, the network guys figure out that it was a SQL injection attack. That’s when management starts asking, “Don’t we have tons of security? How could something like this happen?”

Well, the odds are very good the problem was in your code.

Depending on whose numbers you believe, 60% to 90% of all security attacks come in through websites, and a good proportion of them are SQL injection attacks because they are remarkably easy to launch—if your code isn’t written correctly. (For a good article on how SQL injection attacks work and how to prevent them in your code, see Colin Angus Mackay’s “SQL Injection Attacks and Some Tips on How to Prevent Them” at http://www.codeproject.com/KB/database/SqlInjectionAttacks.aspx.)
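To make the point concrete, here is a minimal sketch of the root cause, using Python’s built-in sqlite3 module (the table, column names, and payload are invented for illustration). The vulnerable query is built by string concatenation, so user input can rewrite the query itself; the parameterized version treats the same input strictly as data:

```python
import sqlite3

# Set up a throwaway in-memory database with one user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Classic injection payload: an always-true clause tacked onto the query.
user_input = "' OR '1'='1"

# VULNERABLE: concatenating input into SQL lets the payload change the
# query's logic -- this returns every row in the table.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE password = '" + user_input + "'"
).fetchall()

# SAFE: a parameterized query binds the input as a plain value,
# so the payload matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE password = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] -- the attacker got in
print(safe)        # [] -- the payload is just an odd password string
```

The fix costs nothing at runtime; it is purely a matter of how the query is written.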

The growing list of security issues is long and seemingly insurmountable. Attacks can target programs, data, websites, cloud-based applications, even computer-controlled machinery. They can be viruses, worms, Trojans, denial-of-service attacks, SQL injection attacks, or highly sophisticated multi-pronged attacks. They can come in through poorly deployed firewalls, stolen or lost laptops or smartphones, sloppy programming practices—let’s say that again: sloppy programming practices—wireless networks, infected memory sticks, public-facing websites, easy-to-guess passwords, insecure APIs, outdated programming frameworks, operating systems, third-party software or components, infected media files, or even an unlocked window in your data center.

Attackers range from lone hackers and organized hacking groups to disgruntled employees, industrial spies, cyberterrorists, and even governments; simple employee mistakes can open the door just as wide. (And just because you don’t work for the military doesn’t mean you are safe; you could become a casualty.)

The attacks can be targeted or indiscriminate, sophisticated or through sheer brute force. Their intent can be to steal, corrupt, destroy, subvert, or modify data, programs, or IP. They can also intend to disrupt, embarrass, or simply prove that it can be done.

But you aren’t a security expert, you’re a programmer. You get paid to write code, not to worry about security. The trouble is, as in the scenario above, an astonishing number of security vulnerabilities start with the code you’re writing (or the tools you are using, or the framework your code runs on, or even the security compliance standard you already have to adhere to). Even the tightest code is no guarantee that someone won’t leave their laptop at the gym, open an infected file, or guess that your boss’s password is “bigcheeze,” but writing solid code gives your organization a far better chance of defending against attacks.
#!
Where to begin?
A number of companies that specialize in security products and services can provide a sense of the scope of the problem, the most common mistakes people make, and how you might avoid making the same mistakes over and over again.

Probably the biggest security challenge facing programmers is something they usually can’t control: upper management. Time after time the people interviewed for this story said that most companies don’t think about security (and don’t even want to think about security) until there is a serious problem, and even then they patch that one thing and just hope that nothing else goes wrong. It’s a reactive approach rather than a proactive approach. Security has to be built in from the very beginning in order to be the most effective, and it has to be implemented across the entire enterprise.

According to Mandeep Khera, head of marketing at Cenzic (www.cenzic.com), “We did a survey a month ago and we asked questions like, ‘What percentage of your applications do you test for vulnerabilities and how often do you test them?’ And most of the respondents said they test less than 10% of their apps for vulnerabilities and only once a year.

“Another question we asked was, ‘Do you spend more or less money on application security than you spend on coffee?’ And 70% said their coffee budget was larger than their application security budget. Therein lies the problem.

“Another question we asked was, ‘How many times have you been hacked in the past 24 months?’ And at least 75% of the respondents said they had been hacked at least once. I’ve asked many programmers why they aren’t doing application security, and time after time the answer is because their management says to them, ‘Let’s not worry about security because we will never be hacked.’ … And then I say, ‘How do you know you haven’t been hacked?’ That’s when they look at me and admit that they don’t know.

“The fact is, with most of the hackings that have taken place, it turns out the hackers had been in the system for months before they were discovered. So there is a disconnect. They’ve been hacked. They know they’ve been hacked and yet they’re not doing anything about it. You have to ask, ‘Are you being penny-wise and pound-foolish by trying to save a few thousand dollars now when it could cost you millions later?’ A serious attack could even cost you your business. We know for a fact that one company’s stock dropped 70% within a week after it was hacked. So how much is your company worth to you?

“I think getting that buy-in from the top, starting a security program, and training your people are the critical things. It might seem overwhelming, but it’s not. You can easily start with one application and then expand to others. Pick your most critical application and concentrate on that and then move outward. It’s that first step people aren’t taking.”

Khera added that developers need to focus on the most critical vulnerabilities, and they need to understand that they are often making the same mistakes over and over again: mistakes that open the door to SQL injection, session management exploits, cross-site request forgery, cross-site scripting, and privilege escalation attacks that developers may or may not know how to code against.
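Of those recurring mistakes, cross-site scripting has one of the most mechanical fixes once you know to apply it: any user-supplied text must be encoded before it is rendered into HTML. A minimal Python sketch (the payload is a standard test string, not from the article):

```python
import html

# User-supplied content carrying a script-injection payload.
comment = '<script>alert("xss")</script>'

# Interpolating it raw into markup would execute the script in the
# victim's browser; escaping first turns the markup into inert text.
unsafe_html = "<p>" + comment + "</p>"
safe_html = "<p>" + html.escape(comment) + "</p>"

print(safe_html)
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

As with SQL injection, the weakness is not exotic; it is simply forgetting to apply the encoding step at every output point.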

“I think that is a big thing, to proactively identify vulnerabilities and then train the developers how to fix them,” he said. “Without training you’ll get nowhere. You have to ensure that senior management clearly buys into the fact that application security should be the No. 1 priority, because without that I can guarantee that you will get hacked. The only question is when.”
#!
It’s natural to react
The natural tendency is to react to problems rather than to be proactive about them, said Chris Wysopal, cofounder and chief information security officer at Veracode. “Almost always there is an incident before something is done. Sometimes there’s an incident and [a company] will fix that one app, but won’t do anything to fix the app that’s sitting on the server right next to it. It’s an ongoing problem and it’s a problem with all the code you’ve already written.

“With large software inventories, there is a tendency to focus most of your efforts on what you think your most critical apps are: high-profile, public-facing applications that most of your customers use, or that have the most financial impact or sensitive information, and so on. If you focus the majority of your effort on that handful of applications while ignoring the rest, maybe because you don’t think the others are that big of a deal, you’re still exposed because they’ve never had any security testing.

“But hackers look for any vulnerability to gain access. They’re going to look for the security vulnerabilities in those applications that are at a low level of security initially. So companies with thousands of applications are beginning to realize that they are going to have to scale their existing processes out to their entire inventory and start to protect those as well.”

Chris Eng, vice president of research at Veracode, added: “The other thing that we’re seeing is customers submitting software to us for analysis that is 30% to 70% composed of third-party code that they didn’t write themselves. And we’re seeing an increasing trend in enterprises requiring their vendors to have their software tested before it’s deployed or before they sign the contract to allow it to be purchased. This last thing in particular—the testing of the software supply chain—has only begun to pick up in the last few years. A lot of companies are still behind on that.”

Wysopal made the point that if an organization ships a product and someone is harmed by a security flaw, the organization will be considered negligent if it can’t show it had security best practices in place. “So even if you can’t eliminate every risk, at least you can show that you were trying,” he said.

“The first step is figuring out where you are at—getting a baseline. If you haven’t done anything at all, then you don’t know how far along that continuum you are. Once you figure out what you have, who owns what, and what security levels if any are in place, then you can start to figure out where your priorities are, which things do you need to focus on first, and so on. I think testing to establish the baseline is a good first step.”

Gwyn Fisher, CTO of Klocwork, expanded on the fact that many programmers are still making the same easily avoided mistakes.

“The reality is that while the media and the public in general focus on new and shiny exploits (e.g. a website divulging personal information), the security domain is still dominated by old, well-known weaknesses in implementation (e.g. SQL injections) that allow a variety of well-documented attack patterns (e.g. spurious quotes, semicolons, logical clauses, etc.) to remain successful,” he said.

“So do you define a risk as an already exploited vulnerability, or is that as-yet unexploited weakness more or less of a risk than the one you know about? In thinking about security risk management, you have to think about investment leverage. By that I mean how much time and effort (and money, obviously) are you going to spend locking down your network around a known set of attack patterns versus fixing the software to remove weaknesses.

“A typical exploit reflects one exposure of an underlying weakness, as shown by exercising one particular attack pattern. That underlying weakness, however, might well exhibit tens or hundreds of vulnerabilities under pressure from a variety of different attack patterns. So fix the weakness and you’ve removed many times the risk from your environment and your users than simply blocking one exploit.”
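Fisher’s leverage argument can be shown with a toy example: a blacklist that blocks one known attack pattern (the classic quote-based injection) is sidestepped by a variant of the same underlying weakness, while parameterization removes the weakness for every variant at once. (The filter below is deliberately naive and invented for illustration, again using Python’s sqlite3.)

```python
import sqlite3

def naive_filter(value):
    """Block one known attack pattern: strip single quotes."""
    return value.replace("'", "")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, 'alice')")
conn.execute("INSERT INTO accounts VALUES (2, 'bob')")

# A numeric context needs no quotes at all, so this variant of the
# same underlying weakness sails straight through the quote filter.
variant = "1 OR 1=1"
leaked = conn.execute(
    "SELECT owner FROM accounts WHERE id = " + naive_filter(variant)
).fetchall()
print(leaked)     # every account leaks: the weakness is still there

# Fixing the weakness itself (parameterization) closes every variant.
safe_rows = conn.execute(
    "SELECT owner FROM accounts WHERE id = ?", (variant,)
).fetchall()
print(safe_rows)  # [] -- the payload is just a nonsensical id value
```

One fix to the code removes the whole class of attack patterns; the filter removed exactly one.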

Fisher went on to say that the same set of problems (SQL injection, parameter reflection, header splitting, and script injection) that has been documented for more than a decade still leads the list of most frequently exploited weaknesses.

“It’s the same set that forms the core of the CWE Top 25, the same set that any two-minute Google search will give you more information on than you could possibly imagine,” he said. “So is there a light at the end of this particularly repetitive tunnel?

“I’m much more a fan of removing weakness than managing exploits, as I firmly take the stance that the investment leverage gained from weakness-removal so vastly outweighs any time/effort/money put into exploits as to make the latter laughable. As a counterpoint, however, and as was widely published in a study performed by one of our competitors several years back, the average developer pays way more attention to a report of an identified exploit than they ever do to a report of a weakness, however well-described in their code.”

Andy Chou, cofounder and chief scientist of Coverity, also spoke about the importance of code security, particularly when it comes to embedded applications.

“Embedded systems in general have particular vulnerabilities that they can be susceptible to,” he said. “Often these systems are written in C or C++, and they can have problems like buffer overflows and integer overflows that can ultimately lead to a vulnerability. And these types of defects in the software can be found and eliminated very, very early, almost as soon as the code is written.
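The integer-overflow case Chou mentions is easy to sketch. In C, adding two signed ints near INT_MAX is undefined behavior, so the standard defense (and one pattern static analyzers look for) is to check the bound before doing the arithmetic. Here is that pre-check pattern modeled in Python; since Python ints never overflow, the 32-bit limits are simulated explicitly:

```python
INT32_MAX = 2**31 - 1
INT32_MIN = -(2**31)

def checked_add(a, b):
    """C-style overflow pre-check: test the bound BEFORE adding.

    Writing `if (a + b > INT32_MAX)` in C is useless, because the
    addition has already overflowed by the time you test it.
    Rearranging the comparison keeps every intermediate in range.
    """
    if b > 0 and a > INT32_MAX - b:
        raise OverflowError("signed 32-bit addition would overflow")
    if b < 0 and a < INT32_MIN - b:
        raise OverflowError("signed 32-bit addition would underflow")
    return a + b

print(checked_add(2_000_000_000, 100_000_000))  # fits: 2,100,000,000
try:
    checked_add(2_000_000_000, 200_000_000)     # would exceed INT32_MAX
except OverflowError as e:
    print(e)
```

This is exactly the kind of defect that is cheap to fix when the code is written and expensive to find once it ships.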

“The analogy I like to use is all the investment in things that can help you after you’ve had a heart attack versus all the things you can do to prevent a heart attack in the first place. People don’t tend to pay attention until the problem gets pretty serious or there’s a crisis; they don’t take a proactive approach unless they have a friend or relative who’s had a heart attack.

“Static analysis tools can help find vulnerabilities earlier. Unfortunately, one of the problems with a lot of these tools is that if you have a lot of false positives—like the tools that cry wolf too often and say there’s a security vulnerability in all these thousands of places (but mostly they are not security vulnerabilities)—then developers will just say, ‘This is just a waste of time,’ and they will tend to not use those tools.

“The difficulty is not giving the developers information; it’s giving them just enough information that they feel they can actually address the issues early without impacting their other job, which is to get the product up and running. I think that’s the real challenge because it takes time and energy away from other things the team has to deal with. Every development team I’ve ever seen is always under-resourced and overstretched just to deliver the functionality they’re supposed to. That’s the reality of what development teams are really paid to do.”

Chou added that an additional problem is that developers often don’t have competency in security. “A lot of companies just don’t have the people and technologies and processes that are reasonable for security. That’s difficult to judge from the outside,” he added. “So if you have people that don’t know what they’re doing, then it’s very easy to mess up what you’re doing for security in development in particular.

“A lot of developers in the past have not really considered security to be fundamental. There have been some companies, like Microsoft, that have tried to build new software development processes that put security fundamentally into the way that software is built, but that’s something that most companies just don’t do.

“I think that’s eventually going to change. There are processes and testing and methodologies out there that do all that, but it’s not very common today.”
#!
Security and the cloud
And when it comes to developing cloud applications, there is a whole new set of things to worry about. Carson Sweet, CEO of CloudPassage, talked about some of the security issues involved in developing for the cloud.

“There is a huge amount of development in the cloud these days. We’re seeing a lot of instances where a business manager wants to get some sort of presence in the cloud, so they go to their developer team or hire a developer and tell them to go set up a cloud server and write some applications. You have situations where a developer is spinning up one of these cloud servers—it’s very easy and simple to do since there’s very little for them to deal with—and in no time you’ve got a server up there and you have data that may or may not be sanitized.

“This is a very big problem, because folks are using live data to do development work on a virtual server that has never been secured or locked down. In an internal data center, we can get away with not really securing the server itself because we’ve got layers and layers of firewalls, intrusion detection, and all these things protecting the servers behind them. Even though it’s not best practice, we can kind of get away without hardening the server. But with a server in the cloud, that’s not the case.

“You don’t have the benefit of all those layers anymore,” Sweet continued. “That means the server itself has to be self-defending because the default state of a server is extremely vulnerable. If you put a server out into the cloud, usually it’s being attacked within 30 minutes. And if you don’t harden the server, you’ve got a real problem.

“So you end up with this application that, even if you’re doing great application coding with all sorts of testing and doing all the things that good application developers do to secure the application, you’re still building a castle on sand because the underlying server doesn’t have the protection it needs to keep from being compromised. This is a pretty huge problem. People are charging out with a bit of a cavalier attitude without thinking about how to protect the server itself.”

Further complicating the issue, Sweet said, is the fact that in the cloud, your own IT department doesn’t control the environment. He recommended finding out what a cloud provider will do in terms of security under its hosting agreement.

“There is usually a very clear demarcation between what a cloud provider will do and what they won’t, and they are very open about it. It’s called a shared responsibility model, and essentially the provider will deal with security up to the point where they hand the keys to that virtual machine off to the user, and then it’s up to them.

“The analogy I use is that it’s like an apartment building. The manager of the building will provide security for the grounds, the common areas, the elevators, etc., but once they hand you the keys to the apartment, the rest is up to you. They don’t know what you’re going to do with that key, how many copies you make, who you give them to, or where you hide them. They can’t do everything for you. So users need to understand that there is a shared responsibility.”

According to Sweet, the difference between keeping data behind a firewall and keeping it in the cloud is like the difference between a castle, with walls around it and a single gate that everything passes through, and a village, where people can get in from any number of directions. To deal with that, companies are moving to a more hybrid model.

“We’re beginning to see more of a hybrid model where a company may offer colocation of data in hardened servers somewhere where mission-critical data can be encrypted and stored, and they are using the more elastic qualities of the cloud to provide a front end to those servers. So a company can deploy their Web servers in a cloud environment, but their data is in a more secure place,” he said.

Another problem Sweet pointed out is that in the cloud, practically all the servers are virtual machines, and software encryption is by definition slower than hardware encryption. “How do we build encryption software that is strong enough and fast enough to run in a virtual machine? That problem hasn’t been solved yet,” he said.

“And even if you encrypt sensitive data, you have to remember that as that software gets faster, so do the hackers’ machines. Let’s say you are a hacker who wants to run some massive attack requiring a lot of computing power. Why not pick a number of soft targets like virtual servers in the cloud that are not typically hardened the same as servers in a data center and use those servers to do your computations? That’s what the cloud was designed for: massively parallel computation.

“You have to make security a cost of doing business,” Sweet said. “That’s not necessarily a developer thing, but if you don’t make security a priority and bite the bullet, then you are putting yourself at risk. And it’s an ongoing thing. Even security measures have a life cycle, and they need to be monitored and updated.”
#!
It’s an attitude problem
Mano Paul from ISC2’s Application Security Advisory Board (www.isc2.org) echoed many of these points.

“There are three primary trends taking place these days. One: Hackers are attacking the application layer. A few years ago, hackers would attack companies because it was cool: they could bring down websites, maybe launch denial-of-service attacks, or cause some disruption to the business. But now they are doing it not to be cool but because of the tremendous amount of money that’s out there.

“Having said that, we’re also seeing a new type of hacker profile, as with Anonymous and LulzSec, where it may not be simply cash they are after. They have a cause. So I like to say the hackers have moved from cool to cash to cause. In terms of the challenges, the hackers always have the advantage, and we need to keep up with this game.

“The biggest problem we see with companies is an attitude problem. ‘It hasn’t happened to us so we must be okay.’ But when the breach happens, it’s too late in the game. In the whole approach to security—particularly software security, which is the majority of the business these days—only recently are we beginning to see companies be more proactive.”

As software grows more complex, with layer upon layer of software, frameworks, APIs, and third-party software, organizations have the problem of counting on services and data that are no longer under their direct control. “We used to have the benefit of knowing something about the APIs and systems we were purchasing, but now we’re buying sub-systems to the systems, like DevForce or VM Systems,” said Paul. “We don’t even know what they are doing. They are a bit like a black box for us.

“The challenge most companies face is that they have to be secure, but they don’t fully understand the complexity of the security issues they have. Not that I want to try to motivate through fear, uncertainty and doubt, but companies need to start looking at this problem differently if they want to get ahead of the game. They can no longer have a myopic perspective on security itself. And what I mean by myopic is that they do a little bit of this and a little bit of that (for example, putting up a firewall and leaving it at that) as opposed to looking at security as the software gets built, right from the requirements down to the point of release and eventually retirement, which is what the ISC2 certification is all about.

“Today, the maximum bang for the buck will come from having educated and trained personnel who can build secure applications to run on secure hosts within secure networks, because it’s not just an application issue; it’s a host and network problem as well.”