No matter how good your perimeter security is, experts agree: Your system has been breached, whether you know it or not. The costs of security flaws—cybersecurity expert Joe Franscella calls them “The Five Horsemen of the Internet Apocalypse: Scam, Extortion, Embarrassment, Theft and Death”—are enormous. So why don’t we consider security a first-class citizen in DevOps?
“Security is still one of the last places where that archaic approach of development handing off the software to a different team and walking away still reigns,” said Tim Buntel, vice president of products for XebiaLabs. “Secure software is just good software, and good software is secure software. Everything that we’re doing in DevOps is allowing us to build better software at scale and release it faster.”
But building security in from the start rather than tacking it on at the end “takes a lot more than getting the security guys to attend standup meetings,” Buntel continued. All too often, he said, even large regulated industries have a tiny cadre of security experts vetting a fraction of a huge portfolio. And these enterprise static analysis runs can take days.
(Related: Putting the test back in DevOps)
For this DevOps Buyer’s Guide, XebiaLabs, along with Microsoft, Dynatrace, CollabNet, Appvance and CloudBees, spoke with SD Times about best practices for Rugged DevOps (a term coined by DevOps author Gene Kim) or DevSecOps. All agree that the time is ripe for adding security scans and stack analysis earlier in the DevOps workflow and mitigating malicious activity. To paraphrase Bruce Schneier, software security may be getting better—but it’s getting worse faster.
Is there a move toward Rugged DevOps?
The open-source Jenkins Continuous Integration (CI) platform has had a pivotal role in the DevOps tool chain and even the cultural lore. CI today, however, is just one piece in the DevOps Continuous Delivery pipeline. Sacha Labourey, CEO and founder of CloudBees, which commercializes Jenkins, has seen his own company paralleling the evolution of DevOps.
“We saw it through the fast adoption of Continuous Delivery, which led to increasingly sophisticated ‘flows’ being implemented on top of Jenkins,” he said. “Consequently, about two years ago, we initiated the development of what’s now known as Jenkins Pipeline, a core feature of the newly released Jenkins 2.0. We also see an increased use of Docker, since it makes it very easy to have the exact same container used in development, testing and production. To that end, we also contributed a lot of features back to the Jenkins community.”
With a large target on its back, Microsoft has focused on security for years. Today, CEO Satya Nadella encourages a “live site culture,” or production-first mindset.
“Part of that mindset is saying, ‘Anytime I see something go wrong, it’s an opportunity for learning. Anytime I see a breach in security, I need to ask what can I do so this doesn’t happen again.’ How can we shorten our detection time, improve mitigation, and limit the radius of users affected?” said Sam Guckenheimer, product owner for Visual Studio Cloud Services at Microsoft.
Those questions are more common today in part thanks to the movement that began 15 years ago, said Guckenheimer. “You had in 2001 the Agile Manifesto: Build software in potentially shippable increments. In 2007, you had 10 deploys a day at Flickr. I think DevSecOps is next,” he said.
What’s holding us back is cultural, but it’s also technical. “Part of the problem is that most security tools are too slow to work in a Continuous Integration model,” said Guckenheimer. “Checkmarx is probably the tool that’s cracked that first. Ideally, you want to be able to have your code scanned as part of the pull request in the Continuous Integration flow, and that’s just not practical with most tools that exist.”
Increasingly automated software delivery tool chains and pipelines can become critical assets similar to the “infrastructure as code” concept. But all the vendors interviewed agreed that Rugged DevOps is primarily a cultural effort. “Tooling needs to help make that happen, but won’t lead it,” said Labourey.
Combatting apathy, enforcing empathy
At Microsoft, one method of instilling application-level security in team culture is via war games waged on software in production. Red teams are attackers, blue teams are defenders, and a referee verifies findings and lets the blue team know if they have thwarted a red team attack or discovered a genuine external threat. “There are rules of engagement: You can’t compromise the customer SLA (service level agreement), you can’t exfiltrate data, you can’t damage the database or bring down the service, but as the red team, you prove that you can get right to that point,” said Guckenheimer.
While some Microsoft teams, such as Azure public cloud, do war games continuously, “For us it’s more like quarterly. We do not have a permanent red team; we rotate them,” said Guckenheimer. “We do have a permanent blue team who are real defenders. The goal is to make them better. When you do a retrospective on these things, everyone comes and listens.”
As a result of war games, Guckenheimer lives by basic security rules:
• Use just-in-time administration
• Use multifactor authentication
• Manage and rotate secrets via key vaults
• Use a DevOps release pipeline
• Destroy compromised instances
• Don’t tip your hand to attackers
• Segregate domains; don’t dual-home servers
• Use different passwords
• Don’t use open file shares
• Assign only one admin per workstation
• Think before clicking links (to stop phishing)
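The third rule, managing and rotating secrets via key vaults, boils down to one pattern: applications fetch credentials from a central store at use time instead of embedding them, so a rotation takes effect without redeploying anything. The toy vault below is purely illustrative; a real deployment would use a managed service such as a cloud key vault.

```python
import secrets

class SecretVault:
    """Toy in-memory vault: one place to store, fetch and rotate secrets.
    Illustrates the pattern only; use a managed key vault in practice."""

    def __init__(self):
        self._store = {}  # name -> (version, value)

    def put(self, name, value):
        version = self._store.get(name, (0, None))[0] + 1
        self._store[name] = (version, value)
        return version

    def get(self, name):
        # Callers fetch at use time, so a rotation is picked up automatically.
        return self._store[name][1]

    def rotate(self, name):
        # Replace the secret with a fresh random value and bump the version.
        return self.put(name, secrets.token_hex(16))

vault = SecretVault()
vault.put("db-password", "initial-password")
old = vault.get("db-password")
vault.rotate("db-password")
new = vault.get("db-password")
assert new != old  # the old credential is no longer valid
```

Because consumers always call `get` at the moment of use, rotating a compromised secret immediately cuts off anyone still holding the old value.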
“Shift left” is the mantra for DevOps, and security is no exception, according to Appvance CEO Kevin Surace. “DevOps means shifting everything left, including app penetration and DDoS (distributed denial of service) testing,” he said.
“It’s great to do once-a-year tests outside or have a security center of excellence. But any build can and does add security risks, which need to be found and evaluated. Source-code scanning should always be run, but you won’t be able to find everything until you execute use-case-driven [app penetration testing] at every build or at a minimum for each release candidate.”
War games and penetration tests are fun, but how do you create that empathetic connection between development and security? One controversial technique used to create empathy is giving pagers to developers so that they feel the pain of late-night operations snafus. Is there a similar approach that could happen with security?
“I don’t like the pager idea,” said Andreas Grabner, a developer advocate for Dynatrace. “What I like is what the team itself does: We don’t deploy after 1 p.m. Why? Because we monitor. We still have three to four hours before we go home to figure out whether that was a good or bad deployment. If at 1:30 p.m. we see an impact on end users, then we can say we introduced a bad deployment, or we roll back to a previous state.”
A pipe dream for low-tech companies
“These days, every business in the world relies on software to do business, but only a small percentage are actually software companies,” said Grabner. “They have to become software-defined businesses, but there’s not enough talent in the world to go around. That’s why the only way out of this is with solid automation and detecting all these problems in your pipeline.”
But is a Continuous Delivery pipeline with security gates even possible for many organizations? “That’s what everyone wants, but it’s very far away,” Grabner admits. Test automation is still in its infancy. “But the awareness that quality needs to be a core part of development has increased enormously.”
The concept of quality gates comes from Toyota’s iconic production line innovations, where any worker can stop the line if a quality check fails. In the case of software pipelines, according to Grabner, the automated quality gate can track architectural metrics such as the number of database queries executed, the number of web services calls, memory usage and more. “What we do with these quality gates is, we are detecting regressions caused by changes pushed on the pipeline: Something has changed from the way it used to be,” said Grabner.
“This feature consumed x amount of memory, and now it consumes y. If it has a negative impact, we need to stop the pipeline. This is what we call metrics-driven Continuous Development.” Teams can also aim to improve the mean time from finding the issue to fixing it.
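A metrics-driven gate of the kind Grabner describes can be sketched in a few lines: compare a build's measured metrics against a baseline and fail the pipeline on any regression beyond a tolerance. The metric names, baseline values and 10% tolerance below are hypothetical, chosen only for illustration.

```python
# Baseline architectural metrics for a feature (hypothetical values).
BASELINE = {"db_queries": 12, "ws_calls": 3, "memory_mb": 256}
TOLERANCE = 0.10  # allow 10% drift before the gate fails

def quality_gate(current, baseline, tolerance):
    """Return the regressions: metrics that grew beyond tolerance."""
    regressions = []
    for metric, base in baseline.items():
        value = current.get(metric, 0)
        if value > base * (1 + tolerance):
            regressions.append((metric, base, value))
    return regressions

# This build suddenly executes four times as many database queries.
build_metrics = {"db_queries": 48, "ws_calls": 3, "memory_mb": 260}
failures = quality_gate(build_metrics, BASELINE, TOLERANCE)
for metric, base, value in failures:
    # In a real pipeline, any failure here would stop the deployment stage.
    print(f"REGRESSION: {metric} went from {base} to {value}")
```

Here the jump in database queries trips the gate while the small memory increase stays within tolerance, matching Grabner's "something has changed from the way it used to be" test.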
Monitoring deployed software is key. “We always also combine it with synthetic monitoring. That means if I deploy a new feature, I can monitor how real users use that feature, while synthetic monitoring checks the feature every 10 minutes,” said Grabner.
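A synthetic monitor of the kind Grabner mentions is simply a scripted probe run on a schedule. In this minimal sketch the HTTP call is abstracted behind a caller-supplied fetch function so the probe logic stays testable; the URL is made up.

```python
def synthetic_check(fetch, url):
    """One synthetic probe: call the endpoint through `fetch` and
    report whether it answered with HTTP 200."""
    try:
        return fetch(url) == 200
    except Exception:
        # Timeouts and connection errors count as failed probes.
        return False

# A scheduler (cron, a CI job, a monitoring agent) would run this every
# 10 minutes, as in Grabner's example, and alert on failures.
ok = synthetic_check(lambda url: 200, "https://example.com/new-feature")
down = synthetic_check(lambda url: 503, "https://example.com/new-feature")
```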
Securing components not your own
What about the code you didn’t write? How do you add security to the 90% of code that is third-party components or open source? “Never just assume security, and do use a governed adoption process and DevOps tools that support that,” said Ward Osborne, information security officer at CollabNet. “Limit your use of third-party code to what you need. Test it. Disable all the stuff you don’t need at the start. Go through security testing.
“Back to the empathy question, if open source is important to your work, then it is good to establish relationships with the creators of the code and help them make it more secure—that is always a plus. Going forward, what we will see, as security becomes more integrated into development processes, is that open-source code will become more secure as well.”
There’s no excuse for playing fast and loose with frameworks, components and libraries, according to Guckenheimer: “Anyone worth their salt these days will only use trusted libraries. There are companies that specialize in that: WhiteSource and Black Duck and Sonatype, who will try to ensure that you are using trusted versions.”
Further, the pipeline also helps enforce policies around trusted components. “Presumably, you don’t consume anything, by policy, that isn’t acceptable because of known vulnerabilities and unsuitable maintenance on its side. These policies are reasonably easy to enforce with tooling,” said Guckenheimer.
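Such a policy check reduces to comparing a build's dependency manifest against a denial list fed by a vulnerability database. The package names and versions below are invented; a real gate would query a scanner or service such as those Guckenheimer names.

```python
# Invented entries standing in for a feed from a vulnerability database.
KNOWN_VULNERABLE = {("libfoo", "1.2.0"), ("libbar", "0.9.1")}

def check_dependencies(manifest):
    """Return the dependencies that violate the known-vulnerability policy."""
    return [dep for dep in manifest if dep in KNOWN_VULNERABLE]

manifest = [("libfoo", "1.2.0"), ("libbaz", "2.0.0")]
violations = check_dependencies(manifest)
if violations:
    # A real pipeline would fail the build at this point.
    print("Policy violations:", violations)
```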
Automation: If it hurts, do it more
The vendors surveyed agree that one baby step must happen before achieving the nirvana of a perfectly built app: test automation. “You need to aim for total automation, and if it hurts, it probably means you need to do it more, not less,” said Labourey. “That’s the only way to reliably and deterministically build products and make sure nobody can intrude through that process.
“Some initial reactions lead to thinking that automation will reduce security and just accelerate the deployment of buggy and insecure applications to production. Au contraire: If the right process is applied, automation behaves exactly like a boa constrictor, increasingly constricting any space left for human error, making it possible to reliably inject quality and security improvements through the process.”
But it will take a cultural change to get those who talk about DevOps and those who talk about security to do the unthinkable: eat lunch together at the next RSA conference. Creating virtuous loops, training consistently around phishing and other exploits, employing quality gates, scanning code and searching for anomalies is never-ending, but you’d better get good at it: It’s no longer optional.
DevOps war stories
SD Times asked security experts to describe the most frightening app-level security problem they’ve seen in their professional lives, and what the outcomes were.
Ward Osborne, information security officer, CollabNet
“Complete indifference to security as a whole, and a lack of understanding of security and how to build it in, are the general components there. For example, some years back a major financial institution failed an audit of its development center. The specific application that was flawed was an ATM platform, so every ATM essentially had a built-in backdoor. The audit found weaknesses across configuration management, a lack of peer reviews, and no location- or role-based controls, which meant contractors could check out code, work on it at home and check it back in.
“It took six months to reengineer security into the ATM platform. This was very expensive: tens of millions of dollars. Had the model of methodology plus training plus tools been utilized to enforce best practices, this could have been prevented.”
Tim Buntel, vice president of products, XebiaLabs
“Honestly, the kinds of things that I’ve seen over the years are distressingly simple in many cases. SQL injection continues to plague so many applications.
“The presence of private keys, Amazon Web Services tokens, database credentials and credentials for third-party APIs in public repos is a common problem. I believe that was behind the Ashley Madison hack. Uber had an exploit that was based on a key stored in an available repo. That’s starting to change: Windows Azure has added a key vault that gives you a nice way to securely manage your keys.”
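Catching credentials before they reach a public repo is largely pattern matching. The two patterns below are deliberately simplified for illustration; real scanners (git-secrets, truffleHog and the like) combine far larger rule sets with entropy checks.

```python
import re

PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return the names of the secret patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# A pre-commit hook or CI step would run this over every changed file
# and block the commit or build on any finding.
snippet = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\n'
findings = scan_text(snippet)
```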
Kevin Surace, CEO, Appvance
“We witnessed a financial services company where under certain circumstances, particularly under heavier loads, a user would log in to their account and access other people’s data. In the end this was a caching issue with a pointer not moving fast enough when the system was at capacity. But this is the kind of thing that could have cost that company embarrassment and even financial losses or legal action. This begged for performance testing early in the cycle and often, looking for situations where response data does not match the expected under nominal and even extreme load conditions.”
Andreas Grabner, developer advocate, Dynatrace
“An example from our own organization happened a couple of weeks ago. We provide a free version of our product, and someone used a security hole in our own signup form for malicious link injection; we became a spambot for them. That was scary, but we have monitoring in place, and we found that the number of requests from a specific geographic location, in our case China, had jumped like crazy. We saw business hours and IP addresses in China, and because we capture all the parameters, we saw spikes of load carrying malicious links.
“That allowed us to do two things: Talk with the ops team about blocking that IP address, and talk with the dev team about not allowing just any kind of text to be filled into that form.”
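The fix on the dev side is ordinary server-side input validation. Here is a minimal sketch, assuming a signup field that should never contain links; the field name and length limit are arbitrary choices for illustration.

```python
import re

URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def validate_signup_field(value, max_len=100):
    """Reject free-text input that embeds links or is suspiciously long."""
    if len(value) > max_len:
        raise ValueError("input too long")
    if URL_RE.search(value):
        raise ValueError("links are not allowed in this field")
    return value

validate_signup_field("Jane Doe")  # accepted unchanged
try:
    validate_signup_field("see http://evil.example/spam")
except ValueError as err:
    print(err)  # links are not allowed in this field
```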
Sam Guckenheimer, Visual Studio Cloud Services, Microsoft
“Any company of significance needs to assume that it has been breached. Phishing is often not random at all; it is very targeted. The Sony hack in 2014 was a case where the attackers were very sophisticated in targeting individuals. They like to target sys admins because they have admin credentials and network access.
“In terms of general DevOps goodness, if you can redeploy from the bare metal up very quickly, then it’s much easier to get rid of any attacker than if you have an infrastructure that stays in place for months or years. If you have static infrastructure and rigid change management, you’re in trouble. If you look at all highly publicized attacks, they all have that characteristic.
“The thing about APTs (advanced persistent threats) is, these are highly sophisticated agencies willing to do months of reconnaissance and stay undetected indefinitely if they can. The goal is often to plant themselves inside a network and just stay there.”
Sacha Labourey, CEO and founder, CloudBees
“At JBoss, over time, we had a number of security issues detected in the application server. Much like for any platform provider (operating systems, application servers, etc.), the situation is pretty stressful, as you know that fixing the bug is only the first and probably easiest part of fixing the problem. The fixed binary now has to be deployed in dozens or thousands of clusters, which might take a very long time, and you don’t really control that part. You can discover unpatched instances long after the problem was fixed.
“Sometimes, those companies simply didn’t react or didn’t know about the issue; information got lost. But in some cases, companies make the conscious decision not to upgrade and to wait for the next upgrade cycle, for fear of introducing instability in their systems. This is where having a fully automated Continuous Delivery environment can hugely increase security, as it makes it possible to test the newly patched environment at very low cost, in very little time.”
A guide to ‘Rugged DevOps’ offerings
Appvance: The Appvance Unified Test Platform (UTP) is designed to make Continuous Delivery and DevOps faster, cheaper and better. It features the ability to create tests, build scenarios, run tests and analyze results; a codeless recording environment; a full test suite; and multiple deployment options. In addition, Appvance UTP allows users to work with their existing tools, write once, and aims to provide a beginning-to-end testing solution.
Atlassian: Atlassian products accelerate delivery pipelines and amplify feedback. Teams have full visibility into their delivery pipeline thanks to JIRA Software, Bitbucket and Bamboo. Teams monitor operations via HipChat integrations and JIRA Service Desk, then collect and organize that input in Confluence to build a shared understanding of customers’ pain points.
BlazeMeter: BlazeMeter aims to fill an important gap in the Continuous Delivery pipeline: performance testing. The company helps teams keep up with the demands of modern software delivery by making load and performance testing part of any workflow. The BlazeMeter solution features the ability to create and control tests using an automation-friendly domain-specific language (DSL); run tests locally or from any of 30 cloud locations at any scale from a single test plan; receive real-time reporting and analytics; and integrate via API and CLI with any other solution.
CA Technologies: CA Technologies DevOps solutions automate the entire application’s life cycle—from testing and release through management and monitoring. The CA Service Virtualization, CA Agile Requirements Designer, CA Test Data Manager and CA Release Automation solutions ensure rapid delivery of code with transparency. The CA Unified Infrastructure Management, CA Application Performance Management and CA Mobile App Analytics solutions empower organizations to monitor applications and end-user experience to reduce complexity and drive constant improvement.
Chef: Chef Enterprise delivers a shared repository of code for automating applications and resources. The solution provides a way for development and operations teams to collaborate and move at the speed of the market. It includes role-based access control, centralized reporting, activity monitoring, an enhanced management console, and multi-tenancy.
CloudBees: CloudBees, the enterprise Jenkins company, is the Continuous Delivery (CD) leader. CloudBees provides solutions that enable DevOps teams to respond rapidly to the software delivery needs of the business. Building on the strength of Jenkins, the world’s most popular open-source CD hub and ecosystem, the CloudBees Jenkins Platform provides a wide range of CD solutions that meet the unique security, scalability and manageability needs of enterprises.
CollabNet: CollabNet offers TeamForge, the industry’s No. 1 open application life-cycle-management platform that helps automate and manage enterprise application life cycle in a governed, secure and efficient fashion. CollabNet delivers enterprise software collaboration, life-cycle tool integration, and visibility to an expanded marketplace that must efficiently manage distributed agile implementations and DevOps initiatives. Leading global enterprises and government agencies rely on TeamForge to extract strategic and financial value from accelerated application development and delivery.
Dynatrace: Dynatrace offers products to help DevOps teams make more successful deployments by identifying bad code changes early and providing collaboration features to communicate success and failures between business and engineering. Dynatrace Application Monitoring automatically detects problems in production, traces end-to-end transactions, identifies end-user impact, provides code-level visibility for root cause diagnostics, eliminates false alarms, and can be automated into the Continuous Delivery processes to stop bad builds early in the delivery pipeline. Dynatrace User Experience Management monitors end users and all their interactions on their devices. It provides crash reports for mobile native apps, user behavior analysis and root cause analysis when bad user experience impacts behavior.
Electric Cloud: Electric Cloud is the leader in DevOps release automation. We help organizations developing enterprise web/IT, mobile and embedded systems applications deliver better software faster by automating and accelerating build, deployment and release processes at scale. Leading organizations like Cisco, E-Trade, Gap, GE, HP, Intel, Lockheed Martin, Qualcomm and Sony use Electric Cloud solutions and services to boost DevOps productivity and agile throughput, while providing a scalable, auditable, predictable and high-performance pathway to production.
IBM: IBM Bluemix provides the easiest, fastest on-ramp for any developer to create next-generation apps for the enterprise with IBM Cloud. Bluemix has grown exponentially since its launch in 2014, rapidly becoming one of the largest open public cloud deployments in the world. Onboarding more than 20,000 new developers each week, Bluemix currently offers more than 140 services and APIs—including advanced tools in cognitive, blockchain, Internet of Things, analytics and more—to design next-era, competitive apps which use data in new ways. Combining the power of these high-value services with the instant and easily accessible infrastructure of IBM Cloud, Bluemix is continuously delivering to developers—rapidly defining and adding what they need to build and iterate quickly.
JetBrains: TeamCity is a Continuous Integration and Delivery server from JetBrains (the makers of IntelliJ IDEA and ReSharper). It takes moments to set up, shows your build results on the fly, and works out of the box. TeamCity will make sure your software gets built, tested, and deployed, and will notify you in the way you choose. TeamCity integrates with all major development frameworks, version-control systems, issue trackers, IDEs, and cloud services, providing teams with an exceptional experience of a well-built intelligent tool. With a fully functional free version available, TeamCity is a great fit for teams of all sizes.
Microsoft: Visual Studio Team Services, Microsoft’s cloud-hosted DevOps service, offers Git repositories; agile planning; build automation for Windows, Linux and Mac; cloud load testing; Continuous Integration and Continuous Deployment to Windows, Linux and Microsoft Azure; application analytics; and integration with third-party DevOps tools. Visual Studio Team Services supports any development language and is based on Team Foundation Server. It also integrates with Visual Studio and other popular code editors. Visual Studio Team Services is free to the first five users on a team, or to users with MSDN.
Serena Software: Serena Deployment Automation bridges the DevOps gap by simplifying and automating deployments and supporting Continuous Delivery. With Deployment Automation, teams can deliver efficient, reliable and high-quality software faster while reducing cycle times and providing feedback. Features include the ability to manage test and production environments, deployment pipeline automation, tool-chain integration, inventory tracking, the ability to create and visualize end-to-end deployment processes, and a reliable and repeatable process.
Tasktop: Tasktop integrates the tools that software delivery teams use to build great software. Tasktop Sync provides fully automated, enterprise-grade synchronization among the disparate life-cycle- management tools used in software development and delivery organizations. It allows practitioners in various disciplines to collaborate on the artifacts and work items they create while operating in their tool of choice. This enhances efficiency, visibility and traceability across the entire software development and delivery life cycle. Tasktop Data collects real-time data from these tools, creating a database of cross-tool life-cycle data and providing unparalleled insight into the health of the project.
XebiaLabs: XebiaLabs’ enterprise-scale Continuous Delivery and DevOps software provides companies with the visibility, automation and control they need to deliver software better, faster and with less risk. Global market leaders rely on XebiaLabs software to meet the increasing demand for accelerated and more-reliable software releases. For more information, please visit www.xebialabs.com.