It’s a scenario right out of a Bond movie. James is charging down a hallway, parkouring over bad guys, shooting everyone he sees in a mad dash to get to the glowing computer screen in a dark basement under the villain’s hideout. Inside that computer: stolen information. Maybe it’s a list of other agents. Maybe it’s nuclear secrets. Or maybe it’s a trove of private e-mails.

If there were any doubt left about the importance of cybersecurity, it was most certainly washed away by the United States’ 2016 election. Hacked e-mail servers and Russian cyber-meddling are still being discussed even after the dust has settled. A topic few people understand, or can even describe properly, has become perhaps the most important and personally threatening matter of our day.


Painting so bleak a picture is not alarmist or melodramatic; the situation is, in fact, difficult to overstate. From wild IoT devices being harnessed and marshaled like so many jumpsuited henchmen to daily, even hourly, attacks on almost every potentially vulnerable surface exposed to the Internet, it’s a scary time to be a software developer.

It’s positively Machiavellian, the array of possible attack vectors in our modern, target-rich environment. Christopher Walken’s Bond villain Max Zorin just wanted to flood the entire Silicon Valley so he could corner the market on chips. It never even occurred to him to use his computer empire to take over the world in a far subtler way.

In truth, the era of SPECTRE and uniquely festooned bodyguards climbing buildings to steal documents is over. We’re now in the era of Alan Cumming’s Boris from “GoldenEye.” In our modern dystopia, millions of Target customers have their payment card data stolen by a mastermind, half the Internet is kicked offline by zombie IoT devices, and an entire political party has its body of e-mails dumped into the public domain at the behest of a foreign power.

Regardless of which party is in power, there have been and will continue to be governmental pressures to deal with the rising tide of computer security threats. The ever-looming danger on the horizon is one of legislation enacted without a full understanding of the problem, such as has happened with the Computer Fraud and Abuse Act, a federal law that can make it a crime, in certain cases, to violate a terms of service agreement.

The Federal Trade Commission has, for some time, been putting the screws to router manufacturers that make dishonest claims about their security offerings. Just last month, the FTC set its sights on D-Link as a vendor that fails to support its older devices with software patches and security updates. In February 2016, the FTC settled with Asus over router insecurity as well.

Jessica Rich, director of the FTC’s Bureau of Consumer Protection, said, “Hackers are increasingly targeting consumer routers and IP cameras—and the consequences for consumers can include device compromise and exposure of their sensitive personal information. When manufacturers tell consumers that their equipment is secure, it’s critical that they take the necessary steps to make sure that’s true.”

It’s a difficult climate, and one that is ripe for liability, potential legislation and, no matter what, bad publicity. Anything and anyone touching your software development process needs to be thinking about security first at all times. Otherwise, your entire company could end up on the wrong side of a very bad cyber threat.

Compounding the issue is the perception that security in the software development life cycle is a burden; security is often seen as a trade-off with velocity. It’s enough to paralyze a development team with scheduling nightmares.

There is hope, however, in the form of DevOps. As the DevOps and container revolutions continue to spread through enterprises, Tim Jarrett, senior director at Veracode, sees a window for change.

“What a lot of security folks are seeing is that this is an unparalleled opportunity to say, ‘As long as you’re automating your software development process anyway, let’s take the time to automate the security testing in there as well,’” he said. “They’re saying this is our change moment, this is where we can take the opportunity to build testing right into that process so it’s not an additional tax on the development team.”
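
To make that concrete, here is a minimal sketch of such an automated gate as a build step. The scanner command (“scan-cli”) and its JSON report shape are hypothetical stand-ins for whatever analysis tool a team actually runs; the point is that a nonzero exit code fails the pipeline, so security testing happens on every build instead of as a separate tax:

```typescript
// A minimal CI gate: run a scanner, fail the build on policy violations.
// "scan-cli" and its JSON output shape are hypothetical stand-ins.
import { execFileSync } from "child_process";

interface Finding {
  severity: "low" | "medium" | "high" | "critical";
  rule: string;
  file: string;
}

const BLOCKING = new Set(["high", "critical"]); // the policy's minimum bar

// Run the (hypothetical) scanner and parse its machine-readable report.
const report = execFileSync("scan-cli", ["--format", "json", "src/"], {
  encoding: "utf8",
});
const findings: Finding[] = JSON.parse(report).findings;

// Anything at or above the bar blocks the build; the rest is left to the team.
const blockers = findings.filter((f) => BLOCKING.has(f.severity));
for (const f of blockers) {
  console.error(`[${f.severity}] ${f.rule} in ${f.file}`);
}

// A nonzero exit code fails the pipeline stage, so the gate runs on every build.
process.exit(blockers.length > 0 ? 1 : 0);
```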

That’s not just confined to technology industries, either. Jarrett said that over the course of 2016, his company saw its products sell into 34 industries, up from 25 the year before. The driver: an expanding understanding that enterprise applications, no matter the vertical, are at risk.

“The growth is all over the map, and in a bunch of places you wouldn’t expect,” he said. “We’re seeing retailers, which have traditionally focused their defenses against attacks at the endpoints and the network layer, starting to look seriously at their application security as well.”

Whole hog or just the bacon?
The real problem with baking security into a development process is that it really has to be Job One—so much so that, often, the vital practices and procedures have to be included in the development process from the get-go.

That’s not always an option for legacy development processes, said Jarrett. “If you’re a Netflix or someone starting with very little technical debt, you can take an approach where you are nuking servers in production periodically to see what happens,” he joked.

Legacy applications don’t have the luxury of having been built with security in mind from Day One, however. No matter the application, though, Jarrett said there is one major factor that determines success or failure.

“Where most of the approaches we’ve seen either succeed or fail is in getting developers bought in,” he said. “It’s about convincing them that this is something they need to spend time on. A couple of things come together as ingredients of that. One is providing them with a way of measuring that, and telling them what the extent of the problems is. Some of that is tooling, but part of it is also having people who can tell them what the tools are doing.”

Chip Morningstar has been grappling with this problem for almost 40 years. As a young developer, he and Randy Farmer built the first graphical massively multiplayer game, Habitat, in 1986 for the Commodore 64 and QLink online service. More recently, he was principal architect at PayPal.

In the 1990s, however, Morningstar and his gang of MMO pioneers attempted to build a freely programmable online world at a company called Electric Communities. While the company never succeeded, the security technology developed there by Morningstar and Mark S. Miller (currently a research scientist at Google and a member of the ECMAScript Committee) is still relevant. Miller even designed a new programming language for the company’s work: E.

Morningstar said that the core problem at Electric Communities is still a core problem for most enterprise environments today. “How do I control what an object can do? You have your piece of the virtual world—your fantasy world with people slaying monsters—and over here you have the virtual stock market where people are buying and trading on NASDAQ for real dollars,” he said.

“How do you prevent someone from taking the axe into the stock market and taking someone’s portfolio? That was the core problem Electric Communities solved. E was an outgrowth of attempting to solve those problems.”
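
The object-capability idea at the heart of E can be sketched in a few lines of TypeScript. The names and interfaces below are invented for illustration, not drawn from E or Electric Communities’ actual code; the principle is that an object can only act on things it has explicitly been handed a reference to:

```typescript
// The object-capability rule: an object can affect only what it holds a
// reference to. All names here are invented for illustration.

interface Damageable {
  takeDamage(points: number): void;
}

class Monster implements Damageable {
  constructor(private hp: number) {}
  takeDamage(points: number): void {
    this.hp -= points;
    console.log(`Monster hit! ${this.hp} hp left.`);
  }
}

class Portfolio {
  constructor(private dollars: number) {}
  balance(): number {
    return this.dollars; // no takeDamage(): nothing an axe could even call
  }
}

class Axe {
  // The axe is granted authority only over things that opt into Damageable.
  swingAt(target: Damageable): void {
    target.takeDamage(10);
  }
}

const axe = new Axe();
axe.swingAt(new Monster(50));       // fine: the fantasy world handed over a capability
// axe.swingAt(new Portfolio(1e6)); // compile error: the stock market never did
```

Because the stock market never hands the axe a capability, there is no call the axe can make against a portfolio; the misuse is unrepresentable rather than merely forbidden.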

Using E and some strict software development discipline, Morningstar said that Electric Communities reaped tremendous benefits outside of just security. “We built all this highly security-conscious software, architectural patterns and coding practices with the idea we were building this ultimately secure virtual world server. That turns out to be a crappy business, so we pivoted. But we had all this stuff and all these habits of how to code and write software that was highly secure, and we discovered that even though in some of our new business undertakings we weren’t nearly as concerned about security, we got this enormous payoff in reliability and software quality and things like how long it took to get something working properly and how many bugs you had,” he said.

For modern enterprise Java developers, Joe-E, a subset of Java designed to foster the same security principles as E, is still available.

Morningstar is also working with Miller to bring this type of security into JavaScript. Together, they have built and proposed the Frozen Realms project, an API designed to enable developers to build JavaScript applications in a far more secure manner than can be implemented today. But it requires some changes to the language at the ECMAScript level.
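
Plain Object.freeze gives a rough, hand-rolled flavor of what the proposal is after. This sketch hardens only a couple of shared primordials, where Frozen Realms would freeze an entire realm of them at the language level:

```typescript
// A hand-rolled taste of Frozen Realms using today's Object.freeze. The real
// proposal freezes whole realms of primordials at the language level; this
// only hardens two of them.

// Untrusted code sharing a realm can normally tamper with shared prototypes,
// e.g. (Array.prototype as any).map = () => "gotcha";

Object.freeze(Array.prototype);
Object.freeze(Object.prototype);

// Tampering now fails for everything loaded afterward.
try {
  (Array.prototype as any).map = () => "gotcha";
} catch {
  // In strict mode the write throws a TypeError; either way, map is unchanged.
}

console.log([1, 2, 3].map((n) => n * 2)); // still [2, 4, 6]
```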

Underneath this language layer, however, is an even more dangerous problem for the world of software development and corporate computing: the danger of a unified architecture. Morningstar said that the adherence to the Unix model of operating systems has doomed the world to a massive, ongoing security problem.

“It’s not that Unix is bad; it’s just that it is a monocrop,” said Morningstar. “It has certain assumptions baked in at the core that people don’t realize are design choices. They’ll argue Linux versus BSD versus Windows, when they’re all the same thing. Caring about OS details is important in a day-to-day way, but the fundamentals that underlie it are all the same. I would like to see at least a reawakening of the awareness that there are other possibilities.”

Thinginess
When it comes to building software securely, there are some basics that can be instilled at the management layer. One of the keys to succeeding with that, said Veracode’s Jarrett, is figuring out when policy should lead, and when decisions should be left up to the developers.

That means “making sure there’s some sort of common agreement on what the minimum bar is. In the organizations we’ve seen this be most effective, that’s generally some policy that says ‘Thou shalt fix these, these, and these types of issues,’ and the rest is left up to the team. If you have that one central policy from the top, you’re not having to negotiate that every time,” said Jarrett.

Another big piece of the security development puzzle is making sure the developers have access to the information they need. “We’ve seen success from organizations turning developers into security champions and sending them to training. They can be the voice inside for security,” said Jarrett.

When it comes to the Internet of Things, however, the problem isn’t necessarily one of insecure code being shipped. It’s that those things will become insecure over time and never be patched when a vulnerability is found. This is the crux of the FTC’s complaints against consumer router makers.

And it’s a problem Brian Behlendorf is trying to solve through his work as executive director of the Hyperledger Project at the Linux Foundation. The project seeks to use blockchain technology, like the kind behind Bitcoin, to solve authentication problems in the world of IoT devices.
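
The tamper-evidence a blockchain ledger provides can be illustrated with a toy hash chain. The sketch below is a conceptual illustration only, not Hyperledger’s actual data structures or APIs: each entry commits to the hash of the one before it, so a rewritten firmware-update history is detectable by any device or auditor that walks the chain:

```typescript
// Toy hash chain showing the tamper-evidence behind a blockchain ledger.
// Conceptual illustration only; not Hyperledger's actual structures.
import { createHash } from "crypto";

interface LedgerEntry {
  deviceId: string;
  firmwareHash: string; // hash of the signed firmware image
  prevHash: string;     // hash of the previous ledger entry ("" for genesis)
}

function entryHash(e: LedgerEntry): string {
  return createHash("sha256")
    .update(`${e.deviceId}|${e.firmwareHash}|${e.prevHash}`)
    .digest("hex");
}

// Walk the chain: every entry must commit to the hash of its predecessor.
function verifyChain(entries: LedgerEntry[]): boolean {
  for (let i = 1; i < entries.length; i++) {
    if (entries[i].prevHash !== entryHash(entries[i - 1])) return false;
  }
  return true;
}

// Usage: append an update that commits to the genesis entry.
const genesis: LedgerEntry = { deviceId: "cam-01", firmwareHash: "a1b2…", prevHash: "" };
const update: LedgerEntry = { deviceId: "cam-01", firmwareHash: "c3d4…", prevHash: entryHash(genesis) };
console.log(verifyChain([genesis, update])); // true, until any entry is altered
```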

In his view, the problem of rogue, unsupported devices in the wild is not something that will solve itself among vendors, nor will it be solved by the users. Instead, he envisions one of two scenarios: the first would involve legislation, and the second would involve liability and insurance.

“The Internet has benefited from trustless innovation,” said Behlendorf. “Suddenly requiring certification to be on a network becomes an easily abused tool to control content and what kind of innovation occurs. On the other hand, I think people who own networks should be able to know what’s on those networks, and set policies for those networks. There shouldn’t be one certifier; we should have more than one, and even insurance policies. One for the home you get if you only use devices certified by some set of arbiters, and another insurance if you want to run anything.

“If we always have the right to modify the software running on cars, watches, phones, IP cameras, etc., we can have a culture that makes it commonplace and acceptable to be updating this software. [Today] device owners can abdicate responsibility. They can say it’s the company’s fault. If you said there are penalties for dumping botnet traffic onto the network, and you can fix that with patches and alternative firmware, then we have a path out of this.”