Barton Friedland sold Apple computers in the early 1980s, at the very beginning of the desktop revolution, but he struggled to make sales to lawyers. The writer Ian Leslie tells the story:
One day, he was with a lawyer in his office, when he had an epiphany. “I was telling him about all the wonderful things he could do if he had his own computer. Finally, he said, ‘Why would I want one of those on my desk? That’s what they have.’ And he pointed toward the secretary pool.”
Thirty-five years later, the battle to sell computers to professionals is over. There’s a computer on almost every desk. No office-based professional, whether a doctor or an accountant, a banker or a lawyer, would expect to do their job without almost constant access to a computer for communication, time-saving calculations, audit trails and much more.
Yet, the 1980s lawyer was not completely wrong. Computers (and typewriters before them) were what the typists and secretaries of the era had. Proper letter-writing was a laborious task. Before computers, a typist often couldn’t correct a mistake and had to start again, so accuracy and speed were essential. Letters had to be properly addressed and then sent to a huge mailroom to be weighed, stamped and dispatched.
Typing pools are largely a thing of the past, because executives are expected to write most of their own letters. Mailrooms have withered as executives are expected to correctly choose their clients’ email addresses. The role of the executive has changed since the 1980s: more independence, but less support. More control, but more potential for error (Oops, the busy doctor just emailed medical records to the wrong patient!).
Software companies have, in their own way, undergone a structurally similar change.
Once upon a time, not very long ago, software companies developed, deployed and operated large, monolithic systems. Coders wrote code, testers tested the code, QA people prodded it with a stick, deployment people put it into production and IT operations teams made sure it was all running properly. Security teams kept the network secure. When a server broke, IT was in charge of fixing or replacing it. When a hole was disclosed in an open-source library, it was security’s job to close it. Coders could focus on code.
Was that past story ever really true? I’m not convinced it was exactly like that, but you get the idea. Developers were part of a big structure that took their code and turned it into something real.
But now the world has changed. The move to the cloud did away with much of companies’ on-premises infrastructure, sending the server room the way of the mailroom. DevOps broke down the barriers between developers, ITOps, testing and QA.
Configuration-as-code moved some traditional operations functions into the hands of developers. Infrastructure-as-code and containerization mean that developers can create their own servers, and automated continuous integration and deployment (CI/CD) pipelines are replacing dedicated deployment and testing teams, taking developers’ code and magicking it into production. In their ultimate form, these trends converge in the movement known as GitOps, where everything is code and all changes are made by developers in Git repositories.
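To make that concrete, here is a minimal sketch of what infrastructure-as-code can look like, written against Pulumi’s Python SDK (Terraform, CloudFormation or plain Kubernetes manifests are equally common choices). The resource names, AMI ID and instance size are placeholders, not a recommendation; the point is simply that a server is now a few lines of code living in a Git repository, reviewed and versioned like any other change.

```python
import pulumi
import pulumi_aws as aws

# A developer-defined web server, declared as code rather than requested
# from an operations team. The AMI ID, size and tags below are placeholders.
web_server = aws.ec2.Instance(
    "web-server",
    ami="ami-0123456789abcdef0",   # placeholder image ID
    instance_type="t3.micro",
    tags={"team": "payments", "managed-by": "pulumi"},
)

# Expose the server's address as a stack output, so CI/CD and other tooling
# can consume it without anyone logging into a cloud console.
pulumi.export("public_ip", web_server.public_ip)
```

Run from a CI pipeline, or reconciled by a GitOps controller watching the repository, a definition like this replaces the old ticket to the operations team.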
All of these trends are putting more and more power in the hands of the developer. The developer has become king (or queen), with more autonomy than ever before.
But with great power comes, yes, great responsibility. Developers aren’t only able to define their own Kubernetes clusters; in many situations, they’re responsible for them, without the safety net of Ops.
Security teams can install all the firewalls they want, but if a developer leaves API keys or other secrets in GitHub repositories, then nothing is safe. A company can install state-of-the-art third-party security agents in its apps, but if a careless coder accidentally deletes even a few lines from the Ansible playbook that installs them, those expensive tools will never start in the first place.
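One common mitigation is to scan for secrets automatically before code ever reaches a shared repository. The sketch below is a deliberately simplified, hypothetical pre-commit check in Python; in practice, teams usually reach for dedicated tools such as gitleaks, git-secrets or GitHub’s built-in secret scanning, and real rule sets are far more extensive than the three illustrative patterns shown here.

```python
import re
import subprocess
import sys
from pathlib import Path

# Illustrative patterns only; real scanners ship far more comprehensive rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                # AWS access key ID
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),    # private key material
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def staged_files() -> list[str]:
    """Return the paths of files staged for the next commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        file = Path(path)
        if not file.is_file():
            continue  # deleted or moved files are skipped
        text = file.read_text(encoding="utf-8", errors="ignore")
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                findings.append((path, pattern.pattern))
    for path, pat in findings:
        print(f"Possible secret in {path} (matched {pat})")
    return 1 if findings else 0  # a non-zero exit blocks the commit when run as a hook

if __name__ == "__main__":
    sys.exit(main())
```

Wired in as a pre-commit hook or a required CI step, a check like this turns “don’t commit secrets” from a policy document into something the pipeline enforces on every change.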
How should software companies respond? The first step is to recognize reality: the developer is king, and that’s unlikely to change.
The second is to ensure that ops, security and other important functions aren’t lost in the new dev-led world.
DevSecOps is more than a buzzword; it can point the way to a methodology that acknowledges that security and operations are now, at least partly, in the hands of developers. DevSecOps is about collaboration, not control. Security and ops teams shouldn’t become roadblocks in these developer-led processes. Instead, they need to ensure that their disciplines are baked into the development process, not bolted on later. They need to become friends, trainers and mentors to developers rather than traffic policemen or border guards. They need to promote security champions inside development teams and make best practice part of the software development life cycle from the start, not something thrown in at the end or as a roadblock in the middle.
Automation and enforcement pull all of this together. Automate everything: inline security scans, deployment from Git, securing code repositories. Make everything seamless from the developer’s side.
As deployment and testing become automated, it grows more important to spot mistakes and issues before they enter the pipeline. Policy enforcement solutions are the enablers of DevSecOps; they let developers code confidently without the constant worry that they’ll accidentally delete the production database, uninstall security agents or push secrets into repos. But to make policy tools really work, companies also need to be smart about how they define and implement their policies, and that’s where operations and security are key again.
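In practice, this usually means a policy engine such as Open Policy Agent or a Kubernetes admission controller sitting in the pipeline. The Python sketch below is a hypothetical, stripped-down stand-in for such a tool, checking a Kubernetes Deployment manifest against three assumed rules (a required “security-agent” sidecar, pinned image tags and resource limits); the rule names and structure are illustrative, not any particular product’s API.

```python
import sys

import yaml  # PyYAML: pip install pyyaml


def check_deployment(manifest: dict) -> list[str]:
    """Return policy violations for a Kubernetes Deployment manifest (illustrative rules only)."""
    violations = []
    containers = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    names = {c.get("name") for c in containers}

    # Rule 1: the (assumed) approved security agent sidecar must be present.
    if "security-agent" not in names:
        violations.append("missing required 'security-agent' sidecar")

    for c in containers:
        image = c.get("image", "")
        # Rule 2: images must be pinned to an explicit tag or digest, never ':latest'.
        if ":" not in image or image.endswith(":latest"):
            violations.append(f"container '{c.get('name')}' uses an unpinned image")
        # Rule 3: every container must declare resource limits.
        if "limits" not in c.get("resources", {}):
            violations.append(f"container '{c.get('name')}' has no resource limits")
    return violations


if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        manifest = yaml.safe_load(f)
    problems = check_deployment(manifest)
    for p in problems:
        print(f"POLICY VIOLATION: {p}")
    sys.exit(1 if problems else 0)  # a non-zero exit fails the CI job and blocks the merge
```

Run as a required check on every pull request, a gate like this catches the misconfiguration before it ever reaches a cluster, while the policy itself stays in the hands of the security and operations people best placed to write it.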
Developers didn’t ask for the keys to the kingdom; automation, the cloud and everything-as-code forced them into their hands anyway. Now it’s up to the rest of the software organization to act as checks and balances on these new monarchs, using policies, training and enforcement solutions to keep them in line and keep their production systems safe, stable and secure.