You may be familiar with Adrian Cockcroft’s work at Netflix. Under his guidance, the company innovated in numerous ways, not the least of which was in high availability and resilience. While Cockcroft was Netflix’s chief cloud architect, the company created Chaos Monkey, founded the discipline of chaos engineering, and built a whole new business model around cloud-hosted streaming video services.
Cockcroft did such good work at Netflix that he essentially eliminated his own position as cloud architect. As Casey Rosenthal, an engineering manager at Netflix, put it, the architecture of the company’s software and services is now far too large to fit into any one person’s brain.
Thus, Cockcroft moved on to a little venture capital work for a time. But this week, word came down on where he’s landed: Amazon Web Services. There he will take on the role of chief cloud architect, and one can only expect he’ll implement a few changes that will make his old team at Netflix very happy.
While there’s no way of knowing what those changes will be, it’s hard to imagine a scenario in which Cockcroft doesn’t already have a list of gripes with AWS: Netflix has used the service extensively for years now.
One area where he might work to improve things at AWS, however, is visibility. We last spoke to Cockcroft this summer to discuss his work with BigPanda, a startup focused on monitoring. He said that monitoring services have become a major (and necessary) center of innovation, thanks in part to the increasing number of cloud applications built on AWS Lambda and other lightweight, event-driven systems.
“Every generation of products adds new things,” said Cockcroft. “When virtualization came along, monitoring tools didn’t understand machines that had more than one operating system on them. They had to change their systems. They got over that; then, a bit later, Docker containers came along and everyone had to figure out how to do containers and microservices. Currently, things are going serverless with AWS Lambda. There’s nowhere you can install monitoring on a Lambda.
“There are a few emergent ways of doing it. In most cases with Lambda, Amazon will tell you how often your function runs and how many errors it had. They don’t tell you which function calls which other function, so you can’t figure out that when function A calls function B, it’s fine, but when function C calls function B, it fails. It’s hard to work out those bits of the system. There are people trying to fix that, but those efforts are very nascent.
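The emergent approaches Cockcroft alludes to generally work around the missing agent by instrumenting the function itself. Here is a minimal sketch in Python of that idea, assuming (hypothetically) that each invoking function adds a `caller` field to the event payload, since AWS does not report caller identity on its own:

```python
import functools
import json
import logging
import time

log = logging.getLogger("lambda-trace")

def traced(handler):
    """Wrap a Lambda handler to log invocations, errors, and the caller.

    The 'caller' field is an assumption for illustration: the invoking
    function would need to add it to the payload, since AWS does not
    provide caller identity natively.
    """
    @functools.wraps(handler)
    def wrapper(event, context):
        caller = event.get("caller", "unknown")
        start = time.time()
        try:
            result = handler(event, context)
            log.info(json.dumps({
                "function": context.function_name,
                "caller": caller,
                "status": "ok",
                "ms": round((time.time() - start) * 1000),
            }))
            return result
        except Exception:
            log.error(json.dumps({
                "function": context.function_name,
                "caller": caller,
                "status": "error",
            }))
            raise
    return wrapper

@traced
def handler(event, context):
    return {"ok": True}
```

With every function wrapped this way, aggregating the logged caller/callee pairs would reveal exactly the pattern he describes: calls from function A succeeding while calls from function C fail.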
“Lambda is also becoming a component of monitoring systems. For most of the things that happen in AWS, like starting up an instance, you now get an event that can trigger a Lambda, and you can use that event to check the instance. You can make an API call to AWS to attach a piece of disk space to it, or something like that. There’s a whole set of automation you can build using Lambda to automate the cloud.
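As a concrete illustration of that automation pattern, here is a hedged sketch of a Python Lambda subscribed to EC2 instance state-change notifications that attaches a pre-provisioned EBS volume to each newly running instance; the volume ID and device name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder values for illustration only.
VOLUME_ID = "vol-0123456789abcdef0"
DEVICE = "/dev/sdf"

def handler(event, context):
    """Triggered by a CloudWatch Events 'EC2 Instance State-change
    Notification'; reacts only when an instance enters 'running'."""
    detail = event["detail"]
    if detail["state"] != "running":
        return
    # Attach the pre-provisioned EBS volume to the new instance.
    ec2.attach_volume(
        VolumeId=VOLUME_ID,
        InstanceId=detail["instance-id"],
        Device=DEVICE,
    )
```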
“Most recently, with the growth of microservices as an architectural pattern, all the monitoring vendors are now booth-display-compliant: they all have microservices on their booth displays. What we’re seeing now is the consequence of what microservices do: they break the application into small pieces, so you need a map of those pieces. You need end-to-end tracing, and efforts like OpenTracing aim to provide a common interface for it. But basically, you need a way to trace things across your system.
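Stripped to its essence, the common interface he mentions comes down to propagating one trace identifier across every hop, so a single request can be followed through the service map. A minimal sketch, using an assumed `X-Trace-Id` header in place of a real convention such as Zipkin’s B3 headers:

```python
import urllib.request
import uuid

# The header name is an assumption; real systems standardize on
# conventions such as Zipkin's B3 headers.
TRACE_HEADER = "X-Trace-Id"

def call_downstream(url, trace_id=None):
    """Pass the same trace ID to every downstream service, so one
    request can be stitched together end to end from the logs."""
    trace_id = trace_id or str(uuid.uuid4())
    req = urllib.request.Request(url, headers={TRACE_HEADER: trace_id})
    with urllib.request.urlopen(req) as resp:
        return trace_id, resp.read()
```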
“On top of that, the pieces are changing continuously. You don’t have one version of the system; it’s changing daily, or many times a day in extreme cases. When you’re trying to figure out what broke in the system, the important thing is to have a direct line back to the code, to figure out what broke.
“You’re seeing more focus now on trying to correlate code pushes and feature enables back to outages and problems, and on people making lots of very small changes rather than trying to wrap everything into one release.”
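One simple way to get that correlation is to stamp every emitted event with the running build and the active feature flags. A sketch of the idea, assuming (hypothetically) that the deploy pipeline injects `GIT_SHA` and `FLAGS` environment variables at release time:

```python
import json
import logging
import os

log = logging.getLogger("app")

# Both environment variables are assumptions: the deploy pipeline
# would need to inject them at release time.
BUILD = os.environ.get("GIT_SHA", "unknown")
FLAGS = os.environ.get("FLAGS", "")

def record(event_name, **fields):
    """Tag every event with the build and flag state, so an outage in
    the monitoring data points straight back to a specific push."""
    fields.update({"event": event_name, "build": BUILD, "flags": FLAGS})
    log.info(json.dumps(fields))
```

When changes ship many times a day, that per-event build tag is the direct line back to the code that Cockcroft describes.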