I’ve previously written about the “Three T’s” of shifting security left: training, tools, and teamwork. In this blog, we’re going to delve down a level and look at some of the tools needed to shift left, what they do, and where in the software lifecycle they belong. The lifecycle question is important to think about because, even when you’re using DevOps principles, there are often still handoffs between teams as an application or service is developed, deployed, and then managed.

What all teams share, however, is the need for high-quality security services, delivered in an appropriate way for their part of the process. Although everyone’s overall mission might still be ‘move fast, but don’t get hacked’, each part of the process has different needs.

Development phase
The end result of traditional software development has typically been the production of a binary that was installed (usually by a different team) onto a pre-existing operating system (probably built and configured by yet another team).

Modern software development, however, often results in a more complex set of artifacts: infrastructure-as-code templates, built container images, or an archive of library references and code to run a serverless function. The idea is that a development team can create most of the assets required to execute the service or function that they are responsible for.

While these new ways of working cut down hand-offs between teams and improve productivity, there are a lot of new attack surface areas that now fall within the domain of the development team. This problem is compounded by newer architectures, such as service mesh and serverless, which often don’t work well with the existing security controls that their ‘friends’ on the security team are responsible for.

The end result is that development teams need new tools and techniques (in addition to their existing secure coding and code vulnerability scanning tools) to produce high quality, secure applications or services.

What should these new tools look like? What should they do? Let’s outline some key characteristics. Tools for the development teams should:

  • Be available in the integrated development environment (IDE) that developers are using to create the various code types
  • Be integrated into the workflow tools (like Jenkins, for instance) that build, test and package the resulting software
  • Have domain expertise from the security team encoded as rules and tests, rather than requiring deep security expertise in non-functional code areas like infrastructure-as-code (IaC); see the sketch after this list
  • Protect new attack surface areas like containers, infrastructure templates and serverless
  • Analyze third-party libraries and frameworks for known vulnerabilities
  • Provide visibility across the stack and feed into wider organizational security management
  • Ideally, not require a separate platform to manage and maintain (i.e., be delivered as a service)
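
To make this concrete, here is a minimal sketch of one such encoded rule: a pytest-style test that fails the build if a container image’s Dockerfile never drops to a non-root user. The file location, the rule itself, and the test framework are illustrative assumptions rather than any specific product’s behaviour; real tools ship far richer rule sets maintained by the security team.

    # Minimal sketch of a security rule encoded as a build-time test.
    # Assumes the repository keeps a Dockerfile at its root and that the
    # check runs under pytest in the CI pipeline; both are assumptions.
    from pathlib import Path

    def last_user_directive(dockerfile_text):
        """Return the user named by the final USER instruction, if any."""
        user = None
        for line in dockerfile_text.splitlines():
            parts = line.strip().split(maxsplit=1)
            if parts and parts[0].upper() == "USER" and len(parts) > 1:
                user = parts[1]
        return user

    def test_container_does_not_run_as_root():
        user = last_user_directive(Path("Dockerfile").read_text())
        assert user not in (None, "root", "0"), (
            "Container image runs as root; add a non-root USER instruction"
        )

A rule like this runs in the same pipeline as functional tests, so a violation blocks the build the same way a failed unit test would.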

Deployment
Whether the responsibility for deploying software into production rests with a development team, a separate site-reliability engineering (SRE) team, or a traditional operations team, there are some additional operational requirements when your application or service is exposed to the big bad world at large. Primarily, we need to worry about managing traffic in and out of the service, inspecting it for threats or exfiltration attempts, and applying appropriate restrictions.

The same changes that have affected the software development phase of the lifecycle also drive new requirements in production. As we seek to minimize hand-offs, tickets, and manual toil, we need solutions that integrate into platforms and workflows, are automated, and that enable security teams to contribute expertise in advance, rather than as a delay-prone handoff in the process.

In more traditional architectures, with less dynamic infrastructure, traditional web application firewalls have been used extensively to protect against application-layer threats like cross-site scripting (XSS) or SQL injection (SQLi). And indeed, even if your service is deployed as serverless code, it doesn’t automatically inherit immunity from common attacks. It’s still the quality of your code, and the security services you provide, that will ultimately protect your applications and customer data. The kinds of services provided by web application firewalls (WAFs) and other tools are still very relevant, but the deployment methodology and architecture need to adapt to modern architectures and platforms.
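
As a rough illustration of that last point, here is a minimal sketch of application-layer inspection inside a serverless handler. The patterns are deliberately naive and the AWS Lambda-style event shape is an assumption; a real WAF applies far richer, centrally managed rule sets rather than ad-hoc regular expressions in every function.

    import re

    # Deliberately naive patterns, for illustration only; a production WAF
    # uses normalisation, managed rule sets, and anomaly scoring.
    SUSPICIOUS = [
        re.compile(r"(?i)<script\b"),           # rudimentary XSS probe
        re.compile(r"(?i)\bunion\s+select\b"),  # rudimentary SQLi probe
    ]

    def handler(event, context):
        # Assumes an API Gateway-style proxy event with query parameters.
        params = (event or {}).get("queryStringParameters") or {}
        for value in params.values():
            if value and any(p.search(value) for p in SUSPICIOUS):
                return {"statusCode": 403, "body": "Request blocked"}
        return {"statusCode": 200, "body": "OK"}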

As in the development phase, it seems appropriate to outline the characteristics of the protection architecture you need. Tools for the deployment phase should:

  • Protect from common threats in production – primarily from malicious client traffic
  • Be deployed automatically based on tags or flags in the application (see the sketch after this list)
  • Protect consistently across all run-time platforms
  • Centralize policy and protections so your security team can create rulesets and protections that are deployed based on characteristics of your application
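
As a sketch of what tag-driven deployment of protections could look like, the snippet below picks a ruleset from an application’s deployment tags and hands it to an enforcement hook. The tag name, the ruleset names, and the attach_ruleset callback are all hypothetical; the point is that the mapping is defined by the security team ahead of time, not negotiated at deployment time.

    # Hypothetical sketch of tag-driven protection; none of these names
    # refer to a real product or API.
    RULESETS = {
        "public-web": ["xss", "sqli", "bot-filtering"],
        "internal-api": ["sqli", "schema-validation"],
    }

    def select_ruleset(tags):
        """Pick protections based on a deployment tag such as 'app-profile'."""
        profile = tags.get("app-profile", "public-web")
        return RULESETS.get(profile, RULESETS["public-web"])

    def deploy_protection(app_name, tags, attach_ruleset):
        """attach_ruleset(app_name, rules) is supplied by your platform."""
        attach_ruleset(app_name, select_ruleset(tags))

    # Example: deploy_protection("billing-api", {"app-profile": "internal-api"}, hook)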

Incident Management and Audit
Your application might have been securely coded and deployed with great defenses, but there are still a lot of ongoing management and audit tasks. This is especially true in a mid- or large-sized cloud environment, where multiple cloud accounts, logging, audit, and incident response can create a large number of components, events, and configurations that need monitoring and investigation.

As with the first two panels of our cloud security triptych, we need to encode our security best practices as rules that can be applied to our assets, so that we can spot, correct, and trend policy violations at scale. In addition, we need to take the masses of audit data and create audit reports appropriate to any particular regulatory framework, such as PCI DSS or HIPAA. Finally, we need great tools to investigate potential incidents and to spot anomalous activity.
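
For example, a single rule such as “every S3 bucket must have server access logging enabled” can be expressed in a few lines. The sketch below uses boto3 and assumes AWS credentials are already configured; a real posture-management tool would cover many more services, rules, and accounts, and would track violations over time.

    # Minimal policy-as-code sketch: flag S3 buckets without server
    # access logging. Assumes boto3 credentials and region are configured
    # in the environment.
    import boto3

    def buckets_without_logging():
        s3 = boto3.client("s3")
        violations = []
        for bucket in s3.list_buckets().get("Buckets", []):
            name = bucket["Name"]
            if "LoggingEnabled" not in s3.get_bucket_logging(Bucket=name):
                violations.append(name)
        return violations

    if __name__ == "__main__":
        for name in buckets_without_logging():
            print(f"policy violation: bucket {name} has no access logging")

The same pattern generalizes to the other examples in the list that follows, such as detecting containers running as root.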

Systems for management, monitoring and audit need:

  • Control-plane API integration into cloud environments for discovery and audit of cloud objects and accounts
  • Flexible rule sets to spot policy violations (e.g. logging not turned on for a service, or a container running as root)
  • Intuitive drill-down interfaces to assist investigations during an incident
  • Easy report generation for common regulatory compliance
  • A way to remediate discovered issues within the tool (see the remediation sketch after this list)
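
Continuing the bucket-logging example above, remediation can often be expressed in the same style: the sketch below turns on server access logging for a flagged bucket. The central-audit-logs target bucket is an assumed name, and in practice you would want guardrails such as a dry-run mode or an approval step before any tool auto-corrects production resources.

    # Minimal remediation sketch for the logging rule above: enable server
    # access logging on a flagged bucket. The target bucket name is an
    # assumption for illustration.
    import boto3

    def enable_access_logging(bucket_name, target_bucket="central-audit-logs"):
        s3 = boto3.client("s3")
        s3.put_bucket_logging(
            Bucket=bucket_name,
            BucketLoggingStatus={
                "LoggingEnabled": {
                    "TargetBucket": target_bucket,
                    "TargetPrefix": f"{bucket_name}/",
                }
            },
        )

    # Example: enable_access_logging("payments-data")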

Putting it all together
Looking at our (certainly incomplete) list of requirements, it’s clear that each team operates in a different way to produce secure software. It’s also true that the tools they use share a lot of commonality in function, but are often deployed and accessed very differently.

Developers need tools that integrate with IDEs and software build orchestrators. Deployment teams need tools to protect running applications on a variety of platforms. And whoever is responsible for in-production security and audit compliance needs tools to detect events, spot insecure configurations, and report on security posture and compliance.
