Chip Morningstar is having something of an existential crisis. After 40 years developing software, co-creating the first graphical massively multiplayer game for the C64 in 1986 (Habitat), and stints in senior roles at Yahoo and PayPal, Morningstar has been looking back on a life spent trying to solve similar problems over and over.
One of those problems is software security, and at Electric Communities back in the mid to late 1990s, Morningstar and his associates had a bit of an epiphany. “How do I control what an object can do? You have your piece of the virtual world: your fantasy world with people slaying monsters, and over here you have the virtual stock market where people are buying and trading on NASDAQ for real dollars,” said Morningstar. “How do you prevent someone from taking the axe into the stock market and taking someone’s portfolio? That was the core problem Electric Communities solved. E was an outgrowth of attempting to solve those problems.”
The E programming language was created by, among others, Mark S. Miller, who is currently a research scientist at Google and a member of the ECMAScript committee. He is also the creator of Miller columns, also known as cascading lists.
Using E and some strict software development discipline, Morningstar said that Electric Communities reaped tremendous benefits outside of just security. “We built all this highly security-conscious software, architectural patterns and coding practices with the idea we were building this ultimately secure virtual world server,” he said.
“That turns out to be a crappy business, so we pivoted, but we had all this stuff and all these habits of how to code and write software that was highly secure, and we discovered that, even though in some of our new business undertakings we weren’t nearly as concerned about security, we got this enormous payoff in reliability and software quality and things like how long it took to get something working properly and how many bugs you had.”
But Morningstar postulated that the real problem with software security is even deeper than can be addressed with best practices and specialized languages. He said that a complete redesign of software architecture from the OS level up is likely required to solve the systemic problems with the Internet of Things and beyond.
For instance, Morningstar said that identity-based security is fraught with perils, as logins and passwords are easily stolen, and when they are used illicitly they can provide access to systems where privileges can then be escalated. Instead, he said a capability-based operating system, such as KeyKOS, could provide an example of the way forward.
KeyKOS, created by Tymshare in the 1970s to run on IBM System/370 series machines, offered a capability-based nanokernel. It’s an entirely different model for an operating system, one where applications must specifically be granted access to things like system memory and disks, rather than relying on the default Unix permissions built around reading and writing. Morningstar said that all modern machines are effectively Unix-based, even Windows: The basic models used are essentially the same.
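The capability model Morningstar describes can be sketched in a few lines. This is an illustrative sketch only, not KeyKOS or E code: instead of consulting an identity-based permission table, a caller can act only through object references it has been explicitly handed, and possession of the reference is the authority.

```python
# Illustrative sketch of capability-based access (names are hypothetical,
# not from KeyKOS or E). Authority flows only through object references.

class File:
    """A resource with full read/write authority."""
    def __init__(self, name):
        self.name = name
        self.contents = ""

    def read(self):
        return self.contents

    def write(self, data):
        self.contents += data

def read_only(file):
    """Attenuate a capability: the returned object can read but never write."""
    class ReadCap:
        def read(self):
            return file.read()
    return ReadCap()

# The "stock market" hands the "fantasy world" a read-only capability.
# No global permission check is needed: the axe-wielder simply holds
# no reference through which a write could ever be expressed.
portfolio = File("portfolio")
portfolio.write("100 shares ACME")
cap = read_only(portfolio)
print(cap.read())
print(hasattr(cap, "write"))
```

The key design point is that security becomes a property of object reachability rather than of identity: if a component was never handed a write capability, there is nothing for an attacker to escalate.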
The problem is one of monoculture: like a farm growing a single crop, one disease can wipe out the entire harvest because every plant is identical. Morningstar said this means exploits, viruses and worms all tend to take advantage of the same types of bugs, even though the operating systems tend to keep those exploits restricted to just one platform.
“It’s not that Unix is bad, it’s just that it is a monocrop,” said Morningstar. “It has certain assumptions baked in at the core that people don’t realize are design choices. They’ll argue Linux versus BSD versus Windows, when they’re all the same thing. Caring about OS details is important in a day-to-day way, but the fundamentals that underlie it are all the same. I would like to see at least a reawakening of the awareness that there are other possibilities.”
Morningstar said that, while servers and desktops aren’t likely to be switching out operating systems any time soon, embedded devices and their operating systems are experiencing a lot of innovation at the moment, and he suggested this may be where change can begin to take hold.
Earlier this year, Brian Behlendorf was appointed to be executive director of the Hyperledger Project at the Linux Foundation. It’s an effort to build blockchain technology (like the kind that powers Bitcoin) into a tool for security and authentication. The technology is being designed to help solve authentication problems in the world of the Internet of Things.
Behlendorf said that he envisions the future of software security coming from the insurance industry. He said that software security problems cannot be solved in a manner that will stifle innovation on the Internet. He also advised that certifying devices as “secure” isn’t necessarily a solution on its own, either.
“The Internet has benefited from trustless innovation,” said Behlendorf. “Suddenly requiring certification to be on a network becomes an easily abused tool to control content and what kind of innovation occurs. On the other hand, I think people who own networks should be able to know what’s on those networks and set policies for those networks. There shouldn’t be one certifier; we should have more than one. And even insurance policies: one for the home you get if you only use devices certified by some set of arbiters, and another insurance if you want to run anything.”
Behlendorf’s primary concern about device certifications is that open-source projects would be left out in the cold without the funds to pay for certification. He fears a world in which consumers would invalidate the certification of their devices by using open-source software and firmware.
Behlendorf also advised that device makers could be held accountable for the security of those devices if enterprises were suddenly buying insurance to cover them. With insurance in place, enterprises would be incentivized to only use devices that were deemed by the insurance provider to be compliant or safe.
In fact, he said, open-source practices could be helpful in solving problems like the Mirai botnet. When vulnerable devices are being exploited in the wild, is it acceptable to hack them and upload community-made firmware that eliminates the vulnerability, or that simply turns the device off until it’s updated?
“If we always have the right to modify the software running on cars, watches, phones, IP cameras, etc., we can have a culture that makes it commonplace and acceptable to be updating this software,” said Behlendorf. “[For device owners], the owners can abdicate responsibility. They can say it’s the company’s fault. If you said there’s penalties for dumping botnet traffic onto the network, and you can fix that with patches and alternative firmwares, then we have a path out of this.”
At QCon San Francisco, Jason Chan, director of engineering for cloud security at Netflix, gave a talk entitled “The Psychology of Security Automation.” In it, he detailed many of the principles and design goals Netflix used to ensure its systems weren’t exposed to attacks.
Chan said that development and security teams tend to be at odds. Developers want to go fast and break things, while security administrators generally want nothing to happen, anywhere. For the development team, movement is a good thing, but for the security team, success is silence.
Thus, integration between the two teams is a major key to success. The way Netflix approached this was to push security teams to integrate their tools into the existing developer workflows. They shunned the idea of adding a new console or dashboard, as it would be difficult to get developers onboard with using it.
The second major principle that helped the Netflix team was called Security++. This practice, said Chan, prioritized security efforts that would also improve the application from other sides. Thus, if work on a security bug would also yield a performance benefit or better stability for the application, it would be a priority for both teams.
Chan said that being transparent is also key for security teams. Developers need to know what is being automated, and why exactly something is being rejected for security reasons. This, coupled with an ultimate goal of reducing cognitive load on developers, combines to make security easier to implement across an organization, he said.
“What we would like to do is to have the ability to create an encapsulation compartment that code can run in, that’s born with these lockdown things already there, and it’s natively part of the runtime engine and the language,” said Morningstar. “That can be provided much more efficiently because it knows in advance it’s going to do this, and there are implementation tricks you can use to make this fast if it’s baked into the environment.”
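One way to picture the kind of compartment Morningstar describes is a container that is “born” holding only the capabilities it was granted, so confined code never sees ambient access to the rest of the system. The sketch below is purely hypothetical; a real runtime, as Morningstar notes, would enforce this natively at the language and engine level, the way E did.

```python
# Hypothetical sketch of an encapsulation compartment: confined code
# receives only the capabilities the compartment was constructed with.
# (Illustrative only -- Python cannot truly confine code this way;
# a real implementation lives inside the runtime engine itself.)

class Compartment:
    def __init__(self, **capabilities):
        # Everything the confined code will ever be able to touch
        # is enumerated here, at the moment the compartment is born.
        self._caps = dict(capabilities)

    def run(self, fn):
        # The confined function is invoked with its granted
        # capabilities as its only arguments -- nothing else.
        return fn(**self._caps)

def untrusted(log):
    # This code can append to the log it was given, and nothing more.
    log.append("hello from the compartment")
    return len(log)

audit_log = []
sandbox = Compartment(log=audit_log)
result = sandbox.run(untrusted)
print(result)
print(audit_log)
```

Because the grant happens once, at construction, the runtime could in principle know every authority the compartment will ever hold in advance, which is what makes the implementation tricks Morningstar mentions possible.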
Whatever the solution, we can all expect developers will have to pay a lot more attention to security at all levels as time passes.